Tokyo R #43
2. Self-introduction
Twitter ID: @siero5335
Work: at a university, I analyze the effects of chemical-substance exposure and develop measurement methods
Specialty: environmental chemistry, analytical chemistry
I use statistics to analyze measurement results
5. Stan
Merits
- Fast sampling via NUTS and HMC
- Active community
- Flexible modeling
- No need to tweak the internals of the No-U-Turn sampler
Demerits
- Specifying probability distributions is difficult (though easier than JAGS?)
- Compilation takes a fairly long time
See http://www.sumsar.net/blog/2013/06/three-ways-to-run-bayesian-models-in-r/
Isn't there some other promising interface?
7. LaplacesDemon
A Bayesian inference interface written in R
- Ships with a large number of MCMC algorithms, probability distributions, and analysis functions
- Supports Laplace approximation, variational Bayes, parallel computation, and Bayes factors
- Parameter/variable importance calculation, sensitivity analysis, and various plots
- Available from http://www.bayesian-inference.com/softwaredownload
Tutorial
http://www.icesi.edu.co/CRAN/web/packages/LaplacesDemon/vignettes/LaplacesDemonTutorial.pdf
Examples
http://www.icesi.edu.co/CRAN/web/packages/LaplacesDemon/vignettes/Examples.pdf
8. LaplacesDemon
Merits
- Written entirely in R
- Can use the probability distributions built into R
- Rich set of MCMC algorithms
- Rich set of supported methods
Demerits
- Being pure R, it is slower than JAGS and Stan (a port to Rcpp is in progress)
- Modeling is not quite as flexible as JAGS or Stan?
- Likelihoods and the like are limited to what can be expressed as R functions
- The community is small and still developing
10. Example data (demonsnacks)

row.names          Serving.Size Calories Total.Fat Saturated.Fat Cholesterol Sodium Total.Carbohydrate Dietary.Fiber Sugars Protein
Apple                       125       65         0             0           0      1                 17             3     13       0
Apricots.Dried              250      213         0             0           0     10                 55             7     49       3
Banana.Chips                 85      441        29            25           0      5                 50             7     30       2
Banana                      225      200         1             0           0      2                 51             6     28       2
Beef.Jerky                   20       82         5             2          10    443                  2             0      2       7
Beer.Light                   29        9         0             0           0      1                  0             0      0       0
Bit.O.Honey                  40      150         3             2           0    118                 32             0     19       1
Carrots.Baby                 15        5         0             0           0     12                  1             0      1       0
Cherries                    155       77         0             0           0      5                 19             2     13       2
Cranberries.Dried            40      123         1             0           0      1                 33             2     26       0
Dates                        24       66         0             0           0      0                 18             2     16       0
Fritolay                     28      137         6             1           0     92                 19             2      2       2
Granola.Bar                  24      116         6             1           0     68                 15             1      0       2
Grapes                       92       62         0             0           0      2                 16             1     15       1
Jellybeans                   11       41         0             0           0      5                 10             0      8       0
Mangos                      165      107         0             0           0      3                 28             3     24       1
M&Ms                        208     1023        44            27          29    127                148             6    132       9
Marshmallows                 50      159         0             0           0     40                 41             0     29       1
Nuts.Mixed                  142      876        80            12           0    595                 30            13      6      24
Nuts.Pistachio              123      702        57             7           0     12                 34            13     10      26
…

A 39-row × 11-column data set.
We build a model to predict Calories.
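Before the Bayesian model, the same prediction task can be sanity-checked with ordinary least squares in base R. This is a sketch only: it uses six rows transcribed from the table above and keeps just two predictors (Total.Fat and Sugars) for illustration, not the full demonsnacks data.

```r
# OLS baseline for predicting log(Calories), using six rows of demonsnacks
# transcribed from the table above (illustrative subset, two predictors only).
Calories  <- c(65, 213, 441, 200, 82, 9)
Total.Fat <- c(0, 0, 29, 1, 5, 0)
Sugars    <- c(13, 49, 30, 28, 2, 0)
fit <- lm(log(Calories) ~ Total.Fat + Sugars)
coef(fit)  # intercept and two slopes of the least-squares baseline
```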
14. What does the call look like?

Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
         Covar=NULL, Iterations=2000, Status=100, Thinning=2,
         Algorithm="AMWG", Specs=list(Periodicity=10))
15. Building the input Data

data(demonsnacks)
N <- nrow(demonsnacks)                               # number of rows
J <- ncol(demonsnacks)                               # number of columns
y <- log(demonsnacks$Calories)                       # response variable
X <- cbind(1, as.matrix(demonsnacks[, c(1, 3:10)]))  # explanatory variables
for (j in 2:J) {X[, j] <- CenterScale(X[, j])}       # centering
mon.names <- c("LP", "sigma")                        # add the residual term?
parm.names <- as.parm.names(list(beta=rep(0, J), log.sigma=0))  # standard deviation
MyData <- list(J=J, X=X, mon.names=mon.names, parm.names=parm.names, y=y)
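CenterScale() is a LaplacesDemon helper; assuming its default behaviour (subtract the mean, divide by the standard deviation), a base-R sketch of what it does to each column is:

```r
# Assumed default behaviour of LaplacesDemon's CenterScale():
# center the column on its mean, then scale to unit standard deviation.
center_scale <- function(x) (x - mean(x)) / sd(x)
x <- c(1, 5, 9, 13)
z <- center_scale(x)
c(mean = mean(z), sd = sd(z))  # mean 0, sd 1
```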
16. Entering the Model: the linear-model case

Model <- function(parm, Data)
{
  beta <- parm[1:Data$J]
  sigma <- exp(parm[Data$J+1])
  beta.prior <- sum(dnormv(beta, 0, 1000, log=TRUE))  # non-informative prior
  sigma.prior <- dhalfcauchy(sigma, 25, log=TRUE)     # prior on the standard deviation
  mu <- tcrossprod(Data$X, t(beta))
  LL <- sum(dnorm(Data$y, mu, sigma, log=TRUE))       # log-likelihood
  LP <- LL + beta.prior + sigma.prior                 # posterior
  Modelout <- list(LP=LP, Dev=-2*LL, Monitor=c(LP, sigma),
                   yhat=rnorm(length(mu), mu, sigma), parm=parm)
  return(Modelout)
}
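What the Model function computes is just log-likelihood plus log-priors, i.e. the unnormalised log-posterior. A self-contained base-R sketch of the same arithmetic on simulated data (note: dnormv in LaplacesDemon is parameterised by the variance, so base R's dnorm needs sd = sqrt(1000); dhalfcauchy(x, 25) corresponds to 2 * dcauchy(x, 0, 25) on x >= 0):

```r
# Base-R sketch of the log-posterior a LaplacesDemon Model function returns,
# for a toy linear model with simulated data.
set.seed(1)
X <- cbind(1, rnorm(20))
y <- X %*% c(5, 2) + rnorm(20)
log_posterior <- function(parm) {
  beta  <- parm[1:2]
  sigma <- exp(parm[3])
  mu <- X %*% beta
  LL <- sum(dnorm(y, mu, sigma, log = TRUE))                 # log-likelihood
  beta.prior  <- sum(dnorm(beta, 0, sqrt(1000), log = TRUE)) # vague normal prior
  sigma.prior <- log(2) + dcauchy(sigma, 0, 25, log = TRUE)  # half-Cauchy(25)
  LL + beta.prior + sigma.prior                              # log-posterior
}
log_posterior(c(5, 2, log(1)))  # finite value near the true parameters
```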
17. Setting the initial values

Initial.Values <- c(rep(0, J), log(1))

Since we have no particular prior information this time, we just plug in arbitrary values.
There is apparently also a GIV (Generate Initial Values) function for setting random initial values, but its usage is not covered in the tutorial, so it is omitted here.
18. Other settings

Covar: whether the user specifies a variance/covariance vector
Iterations: how many MCMC sampling iterations to run
Status: print a message to the console every this many samples
Thinning: thinning rate, for faster computation
Algorithm: choice of MCMC algorithm
Specs: tuning of the MCMC algorithm
  For the AMWG method, Periodicity (the adaptation period) must be set
  — adapt and refit every 10 iterations(?)
  For other algorithms, set things such as Adaptive instead
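The arithmetic behind Iterations, Status, and Thinning is simple enough to check directly; the sketch below uses the values from the call on the slides:

```r
# With Iterations=2000 and Thinning=2, every 2nd draw is retained.
iterations <- 2000; thinning <- 2; status <- 100
kept <- seq(thinning, iterations, by = thinning)
length(kept)         # 1000 thinned samples (matches "Thinned Samples: 1000" later)
iterations / status  # 20 status messages printed to the console
```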
19. A variety of MCMC algorithms

Metropolis-within-Gibbs (MWG), Multiple-Try Metropolis (MTM), No-U-Turn Sampler (NUTS), Preconditioned Crank-Nicolson (pCN), Oblique Hyperrectangle Slice Sampler (OHSS), Random Dive Metropolis-Hastings (RDMH), Random-Walk Metropolis (RWM), Reflective Slice Sampler (RSS), Refractive Sampler (Refractive), Reversible-Jump (RJ), Robust Adaptive Metropolis (RAM), Sequential Adaptive Metropolis-within-Gibbs (SAMWG), Sequential Metropolis-within-Gibbs (SMWG), Slice Sampler (Slice), Stochastic Gradient Langevin Dynamics (SGLD), Tempered Hamiltonian Monte Carlo (THMC), t-walk (twalk), Univariate Eigenvector Slice Sampler (UESS), Updating Sequential Adaptive Metropolis-within-Gibbs (USAMWG), Updating Sequential Metropolis-within-Gibbs (USMWG), Adaptive Directional Metropolis-within-Gibbs (ADMG), Adaptive Griddy-Gibbs (AGG), Adaptive Hamiltonian Monte Carlo (AHMC), Adaptive Metropolis (AM), Adaptive Metropolis-within-Gibbs (AMWG), Adaptive-Mixture Metropolis (AMM), Affine-Invariant Ensemble Sampler (AIES), Componentwise Hit-And-Run Metropolis (CHARM), Delayed Rejection Adaptive Metropolis (DRAM), Delayed Rejection Metropolis (DRM), Differential Evolution Markov Chain (DEMC), Elliptical Slice Sampler (ESS), Gibbs Sampler (Gibbs), Griddy-Gibbs (GG), Hamiltonian Monte Carlo (HMC), Hamiltonian Monte Carlo with Dual-Averaging (HMCDA), Hit-And-Run Metropolis (HARM), Independence Metropolis (IM), Interchain Adaptation (INCA), Metropolis-Adjusted Langevin Algorithm (MALA), Metropolis-Coupled Markov Chain Monte Carlo (MCMCMC)

The features of each algorithm are described at http://www.bayesian-inference.com/mcmc
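To make the list above less abstract, here is a minimal base-R sketch of the simplest entry, Random-Walk Metropolis (RWM), targeting a standard normal distribution:

```r
# Minimal Random-Walk Metropolis sampler targeting a standard normal.
set.seed(42)
n <- 5000
chain <- numeric(n); chain[1] <- 0; accepts <- 0
log_target <- function(x) dnorm(x, log = TRUE)
for (i in 2:n) {
  proposal <- chain[i - 1] + rnorm(1)                 # symmetric random-walk proposal
  if (log(runif(1)) < log_target(proposal) - log_target(chain[i - 1])) {
    chain[i] <- proposal; accepts <- accepts + 1      # accept
  } else {
    chain[i] <- chain[i - 1]                          # reject: repeat the old value
  }
}
rate <- accepts / (n - 1)
rate  # acceptance rate; compare with the 0.15-0.5 guideline the slides cite
```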
20. Summary of the workflow

Write the Data, the model, and the initial values first, and save each as a vector.

Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
         Covar=NULL, Iterations=XX, Status=XX, Thinning=XX,
         Algorithm="XX", Specs=list(Periodicity=10))

Fill in the XX parts of the call above to complete the model.
Run it!
21. Summary of the workflow (continued)

Besides linear regression, what else can the Model function express?
22. What other models can you build? (partial list)

ANCOVA; ANOVA, One-Way; Approximate Bayesian Computation (ABC); ARCH-M(1,1); Autoregressive Conditional Heteroskedasticity, ARCH(1,1); Autoregressive Moving Average, ARMA(1,1); Beta-Binomial; Binary Logit; Binomial Probit; Change Point Regression; Cluster Analysis; Conditional Autoregression (CAR), Poisson; Dynamic Linear Model (DLM): 1, 2, 3, 4; Exponential Smoothing; Factor Analysis; Factor Regression; Gamma Regression; GARCH(1,1); Geographically Weighted Regression; Hidden Markov Model; Hierarchical Bayes; Inverse Gaussian Regression; Kriging; LASSO; Latent Dirichlet Allocation (LDA); MANOVA; Missing Values: 1, 2, 3; Mixture Model; Multilevel Model; Multivariate Laplace Regression; Multivariate Regression; Panel, Autoregressive Poisson; Penalized Spline Regression; Poisson-Gamma Regression; Polynomial Regression; Power Priors; Quantile Regression; Ridge Regression; Robust Regression; Seemingly Unrelated Regression (SUR); Simultaneous Equations; Spatial Autoregression (SAR); Survival Model; TARCH(1); Threshold Autoregression (TAR); Topic Model; Variable Selection; Vector Autoregression, VAR(1); Weighted Regression; Zero-Inflated Poisson (ZIP)

Each of these models is written up at
http://www.icesi.edu.co/CRAN/web/packages/LaplacesDemon/vignettes/Examples.pdf
24. Displaying the results

Fit

Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Covar = NULL, Iterations = 2000, Status = 100, Thinning = 2,
    Algorithm = "AMWG", Specs = list(Periodicity = 10))

Acceptance Rate: 0.43832 (0.15-0.5 is good)
Algorithm: Adaptive Metropolis-within-Gibbs
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
  beta[1]   beta[2]   beta[3]   beta[4]   beta[5]   beta[6]   beta[7]   beta[8]   beta[9]
0.1972066 0.6575873 0.8393190 0.4202711 0.8110877 0.3030530 1.1820196 0.5230650 1.4233538
 beta[10] log.sigma
0.5077560 0.4703056

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
         All  Stationary
Dbar  83.312      81.464
pD   110.028       9.120
DIC  193.340      90.584
Initial Values:
[1] 0 0 0 0 0 0 0 0 0 0 0
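The DIC block above is just the posterior mean deviance Dbar plus the effective number of parameters pD. Reproducing the arithmetic for the "Stationary" column:

```r
# DIC = Dbar + pD, using the "Stationary" column reported above.
Dbar <- 81.464
pD   <- 9.120
DIC  <- Dbar + pD
DIC  # 90.584, as in the output
```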
25. Displaying the results
Check with the Consort function.
Consort(Fit): the Fit results, plus:

Iterations: 2000
Log(Marginal Likelihood): NA
Minutes of run-time: 0.13
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 11
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 800
Recommended Burn-In of Un-thinned Samples: 1600
Recommended Thinning: 40
Specs: (NOT SHOWN HERE)
Status is displayed every 100 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 1000
Thinning: 2
Demonic Suggestion (details on the next slide)

Summary of all samples
             Mean    SD MCSE    ESS     LB Median     UB
beta[1]      5.01  0.27 0.03 149.37   4.80   5.04   5.25
beta[2]     -0.41  0.42 0.06  82.20  -1.12  -0.41   0.41
beta[3]     -0.27  0.94 0.26  13.61  -2.15  -0.29   1.61
beta[4]     -0.18  0.71 0.16  35.43  -1.58  -0.17   1.24
beta[5]     -0.37  0.51 0.12  27.36  -1.39  -0.37   0.61
beta[6]     -0.48  0.32 0.04 123.22  -1.09  -0.47   0.12
beta[7]      2.20  0.59 0.13  38.16   0.99   2.24   3.32
beta[8]      0.57  0.47 0.09  63.28  -0.35   0.58   1.51
beta[9]     -0.14  0.60 0.11  41.60  -1.43  -0.10   0.91
beta[10]     1.52  0.74 0.19  23.28  -0.10   1.46   2.93
log.sigma   -0.36  0.17 0.02 122.82  -0.60  -0.37  -0.05
Deviance    83.31 14.83 2.08 104.30  73.75  81.38  98.20
LP         -89.07  7.42 1.04 104.29 -96.52 -88.11 -84.29
sigma        0.71  0.17 0.02  86.15   0.55   0.69   0.95

Summary of stationary samples
             Mean   SD MCSE   ESS     LB Median     UB
beta[1]      5.03 0.11 0.01 89.96   4.80   5.02   5.24
beta[2]     -0.48 0.40 0.10 24.97  -1.10  -0.51   0.43
beta[3]     -0.43 0.68 0.26  7.09  -2.42  -0.33   0.82
beta[4]     -0.15 0.65 0.23  9.00  -1.35  -0.23   1.03
beta[5]     -0.50 0.33 0.06 31.35  -1.21  -0.45   0.12
beta[6]     -0.43 0.24 0.04 64.41  -0.93  -0.44  -0.05
beta[7]      2.13 0.47 0.17 15.12   1.19   2.10   2.96
beta[8]      0.58 0.39 0.10 22.62  -0.11   0.51   1.66
beta[9]      0.02 0.45 0.16 15.25  -0.80   0.03   0.87
beta[10]     1.73 0.45 0.14 16.81   0.99   1.69   2.57
log.sigma   -0.37 0.12 0.02 64.38  -0.57  -0.39  -0.12
Deviance    81.46 4.27 0.72 61.31  74.56  81.64  91.93
LP         -88.15 2.14 1.04 61.24 -93.38 -88.24 -84.70
sigma        0.69 0.08 0.02 63.20   0.56   0.68   0.88
26. Demonic Suggestion (the demon's whisper)
Check with the Consort(XX) function.
The Demonic Suggestion system tells you whether the run has converged,
diagnosing convergence from the following five items:
1. Which MCMC algorithm was used
2. Acceptance rate (within the range 0.15-0.5 is desirable)
3. Monte Carlo standard error (MCSE; at most 6.27% is desirable)
4. Effective sample size (ESS) (must be at least 100)
5. From which sample onward the chain became stationary
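Items 3 and 4 can be illustrated in base R. This is a rough sketch only: it uses the lag-1 autocorrelation and an AR(1) approximation as crude stand-ins for the package's actual MCSE and ESS estimators.

```r
# Crude MCSE and ESS for an autocorrelated chain, via an AR(1) approximation.
set.seed(7)
chain <- as.numeric(arima.sim(list(ar = 0.5), n = 2000))  # autocorrelated draws
rho1 <- acf(chain, plot = FALSE)$acf[2]        # lag-1 autocorrelation
ess  <- length(chain) * (1 - rho1) / (1 + rho1)  # effective sample size (AR(1))
mcse <- sd(chain) / sqrt(ess)                    # Monte Carlo standard error
c(ESS = ess, MCSE = mcse)  # ESS well below 2000 because the draws are correlated
```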
27. The demon's whisper (model optimization)

Consort(Fit)
(snip)
Demonic Suggestion

Due to the combination of the following conditions,

1. Adaptive Metropolis-within-Gibbs
2. The acceptance rate (0.4383182) is within the interval [0.15,0.5].
3. At least one target MCSE is >= 6.27% of its marginal posterior standard deviation.
4. At least one target distribution has an effective sample size (ESS) less than 100.
   The worst mixing chain is: beta[3] (ESS=13.60738).
5. Each target distribution became stationary by 801 iterations.

(snip)

Laplace's Demon has not been appeased, and suggests copy/pasting the following R code into the R console, and running it.

Initial.Values <- as.initial.values(Fit)
Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
         Covar=NULL, Iterations=40000, Status=200, Thinning=40,
         Algorithm="CHARM", Specs=NULL)

It even tells you which settings to use for the next run.
28. Result of obeying the whisper, part 1

Fit2

Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Covar = NULL, Iterations = 40000, Status = 200, Thinning = 40,
    Algorithm = "CHARM", Specs = NULL)

Acceptance Rate: 0.76534 (0.15-0.5 is good)
Algorithm: Componentwise Hit-And-Run Metropolis
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
   beta[1]    beta[2]    beta[3]    beta[4]    beta[5]    beta[6]    beta[7]    beta[8]
0.01240755 0.14417120 0.93733150 0.54973268 0.25152772 0.08115080 0.27561874 0.18940992
   beta[9]   beta[10]  log.sigma
0.30695378 0.57556990 0.01964738

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All  Stationary
Dbar 82.246      82.246
pD   17.579      17.579
DIC  99.825      99.825
Initial Values:
[1]  4.9974512 -0.1382687 -1.4346129  0.8539510 -1.1331011 -0.4643098  1.8609538  0.5793896
[9] -0.6495910  2.3852500 -0.2692097
29. The demon's whisper 2 (model optimization)

Consort(Fit2)
(snip)
Demonic Suggestion

Due to the combination of the following conditions,

1. Componentwise Hit-And-Run Metropolis
2. The acceptance rate (0.7638741) is above 0.5.
3. At least one target MCSE is >= 6.27% of its marginal posterior standard deviation.
4. Each target distribution has an effective sample size (ESS) of at least 100.
5. Each target distribution became stationary by 1 iteration.

(snip)

Laplace's Demon has not been appeased, and suggests copy/pasting the following R code into the R console, and running it.

Initial.Values <- as.initial.values(Fit)
Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values,
         Covar=NULL, Iterations=364000, Status=94545, Thinning=364,
         Algorithm="CHARM", Specs=list(alpha.star=0.44))

It still doesn't meet the criteria, so one more helping.
30. Result of obeying the whisper, part 2

Fit3

Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Covar = NULL, Iterations = 364000, Status = 94545, Thinning = 364,
    Algorithm = "CHARM", Specs = list(alpha.star = 0.44))

Acceptance Rate: 0.43995
Algorithm: Componentwise Hit-And-Run Metropolis
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
   beta[1]    beta[2]    beta[3]    beta[4]    beta[5]    beta[6]    beta[7]    beta[8]
0.01358412 0.13479294 0.90407192 0.50149095 0.27029404 0.08264718 0.29542627 0.17658442
   beta[9]   beta[10]  log.sigma
0.31023915 0.59911801 0.01874272

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All  Stationary
Dbar 82.246      82.246
pD   15.478      15.478
DIC  97.724      97.724
Initial Values:
[1]  5.139087920  0.138937856 -1.348943603  0.002571293 -1.075622908 -0.481418573  2.260021886
[8]  0.211831011  0.013342143  2.442091371 -0.359196954
31. The demon's whisper 3 (model optimization)

Consort(Fit3)
(snip)
Demonic Suggestion

Due to the combination of the following conditions,

1. Componentwise Hit-And-Run Metropolis
2. The acceptance rate (0.4399505) is within the interval [0.15,0.5].  # within the criterion
3. Each target MCSE is < 6.27% of its marginal posterior standard deviation.  # SD within the criterion
4. Each target distribution has an effective sample size (ESS) of at least 100.  # sampled sufficiently
5. Each target distribution became stationary by 1 iteration.  # stationary from the start

Laplace's Demon has been appeased, and suggests the marginal posterior samples should be plotted and subjected to any other MCMC diagnostic deemed fit before using these samples for inference.

Laplace's Demon is finished consorting.

Once the criteria are met, there is no further whispering.
35. Checking the results (posterior predictive check)

Pred <- predict(Fit, Model, MyData, CPUs=1)
summary(Pred, Discrep="Chi-Square")

Bayesian Predictive Information Criterion:
  Dbar    pD   BPIC
81.464  9.12 99.704
Concordance: 0.9487179
Discrepancy Statistic: 23.848
L-criterion: 36.25, S.L: 0.32

Concordance: the proportion of observed values that fell within the 95% credible interval of the predicted values
Discrepancy Statistic: larger when the fit is poor
L-criterion: smaller when the observed and predicted values are close
Bayesian Predictive Information Criterion: apparently used to estimate the posterior mean of the log-likelihood of the predictive distribution when the true distribution is not contained in the specified family of probability distributions... it seems
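The Concordance statistic is easy to illustrate in base R: count how often each observed value falls inside the central 95% interval of its posterior predictive draws. This sketch uses simulated draws standing in for the output of predict():

```r
# Sketch of Concordance: fraction of observations inside their 95% posterior
# predictive interval (simulated draws stand in for predict() output).
set.seed(3)
y <- rnorm(39, 5, 1)                       # "observed" values, one per snack
draws <- replicate(39, rnorm(1000, 5, 1))  # 1000 predictive draws per observation
inside <- sapply(seq_along(y), function(i) {
  q <- quantile(draws[, i], c(0.025, 0.975))
  y[i] >= q[1] && y[i] <= q[2]
})
mean(inside)  # concordance, expected near 0.95 here
```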
36. Checking the results (posterior predictive check)

plot(Pred, Style="Covariates", Data=MyData)
plot(Pred, Style="Density", Rows=1:9)

Inspect the posterior predictive graphically, using covariate plots, posterior density plots, and so on.
37. Checking the results (variable importance)

Importance(Fit, Model, MyData, Discrep="Chi-Square")

               BPIC Concordance  Discrep L-criterion
Full         99.704       0.949   25.916      36.120
X[,-1]   253266.467       0.000 1713.563     198.796
X[,-2]      151.015       0.974   23.364      37.656
X[,-3]      467.209       0.974   24.068      39.140
X[,-4]      200.881       0.949   22.644      37.693
X[,-5]      147.276       0.974   26.614      37.343
X[,-6]      109.596       0.949   28.831      37.282
X[,-7]     1758.897       0.872   82.402      49.687
X[,-8]      175.367       0.949   30.726      36.935
X[,-9]      121.804       0.974   25.534      36.761
X[,-10]    1172.784       0.872   59.493      45.012

Importance is computed from how far the estimates of the response deviate from the observed values when each variable is removed.
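The idea behind Importance() can be sketched with a leave-one-predictor-out loop in base R. This is an illustration only: it measures degradation by the increase in residual sum of squares under lm, not by the BPIC/Discrepancy statistics the package reports.

```r
# Leave-one-out variable importance: refit with each predictor removed and
# record how much the fit degrades (RSS increase as a stand-in for BPIC).
set.seed(11)
X <- data.frame(x1 = rnorm(40), x2 = rnorm(40), x3 = rnorm(40))
y <- 2 * X$x1 + 0.1 * X$x3 + rnorm(40)
full_rss <- sum(resid(lm(y ~ ., data = X))^2)
importance <- sapply(names(X), function(v) {
  sub <- X[, setdiff(names(X), v), drop = FALSE]
  sum(resid(lm(y ~ ., data = sub))^2) - full_rss  # RSS increase when v is dropped
})
importance  # x1, the strong predictor, should dominate
```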
43. LaplacesDemon.hpc
Investigating speed-ups

LaplacesDemon.hpc(Model, Data, Initial.Values,
    Covar=NULL, Iterations=X, Status=X, Thinning=X,
    Algorithm="X", Specs=NULL, Chains=X, CPUs=X,
    Packages=NULL, Dyn.libs=NULL)

Chains: number of chains to run in parallel
CPUs: number of CPUs to use
Packages: needed when the Model is built on top of other packages
Dyn.libs: needed when you want to do something with the chains via DLLs

But for some reason it took 12.6 minutes even with 8 cores...
Just converting to parallel processing eats close to 3 minutes.
Perhaps it pays off with larger data sets?
Someone please verify.
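The overhead the slide observes is typical of parallel back-ends: spawning workers and shipping data to them costs a fixed setup time, so short runs can be slower in parallel. A base-R sketch of the chains-in-parallel pattern using the built-in parallel package (mclapply forks on Unix; mc.cores is kept at 1 here so it also runs serially on Windows):

```r
# Running independent chains via parallel::mclapply; each worker gets its
# own seed. With short chains, worker start-up can dominate total time.
library(parallel)
run_chain <- function(seed) {
  set.seed(seed)
  mean(rnorm(1e4))  # stand-in for one MCMC chain's work
}
chains <- mclapply(1:4, run_chain, mc.cores = 1)
length(chains)  # 4 independent chain results
```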