Relevant statistics for (approximate) Bayesian
model choice
Christian P. Robert
Université Paris-Dauphine, Paris & University of Warwick, Coventry

bayesianstatistics@gmail.com
a meeting of potential interest?!

MCMSki IV to be held in Chamonix Mt Blanc, France, from
Monday, Jan. 6 to Wed., Jan. 8 (and BNP workshop on
Jan. 9), 2014
All aspects of MCMC++ theory, methodology and applications, and more! All posters welcome!

Website http://www.pages.drexel.edu/~mwl25/mcmski/
Outline

Intractable likelihoods
ABC methods
ABC as an inference machine
ABC for model choice
Model choice consistency
Conclusion and perspectives
Intractable likelihood

Case of a well-defined statistical model where the likelihood function
ℓ(θ|y) = f(y1, . . . , yn | θ)
is (really!) not available in closed form
cannot (easily!) be either completed or demarginalised
cannot be (at all!) estimated by an unbiased estimator
⇒ Prohibits direct implementation of a generic MCMC algorithm like Metropolis–Hastings
Getting approximative

Case of a well-defined statistical model where the likelihood function
ℓ(θ|y) = f(y1, . . . , yn | θ)
is out of reach

Empirical approximations to the original B problem
Degrading the data precision down to a tolerance ε
Replacing the likelihood with a non-parametric approximation
Summarising/replacing the data with insufficient statistics
Different [levels of] worries about abc

Impact on B inference
a mere computational issue (that will eventually be solved by more powerful computers, etc., even if too costly in the short term, as for regular Monte Carlo methods) Not!
an inferential issue (opening opportunities for a new inference machine on complex/Big Data models, with a legitimacy differing from the classical B approach)
a Bayesian conundrum (how closely related to the/a B approach? more than recycling B tools? true B with raw data? a new type of empirical B inference?)
Approximate Bayesian computation
Intractable likelihoods
ABC methods
Genesis of ABC
ABC basics
Advances and interpretations
ABC as knn
ABC as an inference machine
ABC for model choice
Model choice consistency
Conclusion and perspectives
Genetic background of ABC

skip genetics

ABC is a recent computational technique that only requires being able to sample from the likelihood f(·|θ)
This technique stemmed from population genetics models, about 15 years ago, and population geneticists still contribute significantly to methodological developments of ABC.
[Griffith & al., 1997; Tavaré & al., 1999]
Demo-genetic inference

Each model is characterized by a set of parameters θ that cover historical (divergence time, admixture time, ...), demographic (population sizes, admixture rates, migration rates, ...) and genetic (mutation rate, ...) factors
The goal is to estimate these parameters from a dataset of polymorphism (DNA sample) y observed at the present time

Problem:
most of the time, we cannot calculate the likelihood of the polymorphism data f(y|θ)...
Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium

Sample of 8 genes

Kingman's genealogy
When time axis is normalized,
T(k) ∼ Exp(k(k − 1)/2)

Mutations according to the Simple stepwise Mutation Model (SMM)
• dates of the mutations ∼ Poisson process with intensity θ/2 over the branches
• MRCA = 100
• independent mutations: ±1 with pr. 1/2

Observations: leaves of the tree
θ̂ = ?
Much more interesting models. . .
several independent loci
Independent gene genealogies and mutations
different populations
linked by an evolutionary scenario made of divergences, admixtures, migrations between populations, selection pressure, etc.
larger sample size
usually between 50 and 100 genes

[Figure: a typical evolutionary scenario, with populations POP 0, POP 1 and POP 2 descending from the MRCA through divergence times τ1 and τ2]
Instance of ecological questions

How did the Asian ladybird beetle arrive in Europe?
Why do they swarm right now?
What are the routes of invasion?
How to get rid of them?
Why did the chicken cross the road?
[Lombaert & al., 2010, PLoS ONE]
Worldwide invasion routes of Harmonia axyridis

[Figure: worldwide routes of invasion of Harmonia axyridis. For each outbreak, the arrow indicates the most likely invasion pathway and the associated posterior probability, with 95% credible intervals in brackets]
[Estoup et al., 2012, Molecular Ecology Res.]
⇒ Intractable likelihood

Missing (too much missing!) data structure:
f(y|θ) = ∫_G f(y|G, θ) f(G|θ) dG
cannot be computed in a manageable way...
[Stephens & Donnelly, 2000]
The genealogies are considered as nuisance parameters
This modelling clearly differs from the phylogenetic perspective where the tree is the parameter of interest.
A?B?C?

A stands for approximate
[wrong likelihood / pic?]
B stands for Bayesian
C stands for computation
[producing a parameter
sample]
A?B?C?

A stands for approximate [wrong likelihood / pic?]
B stands for Bayesian
C stands for computation [producing a parameter sample]

[Figure: ABC posterior density estimates of θ across repeated runs, with effective sample sizes (ESS) ranging from about 76 to 156]
How Bayesian is aBc?

Could we turn the resolution into a Bayesian answer?
ideally so (not meaningful: requires an ∞-ly powerful computer)
approximation error unknown (w/o costly simulation)
true B for wrong model (formal and artificial)
true B for noisy model (much more convincing)
true B for estimated likelihood (back to econometrics?)
illuminating the tension between information and precision
ABC methodology
Bayesian setting: target is π(θ)f(x|θ)
When likelihood f(x|θ) not in closed form, likelihood-free rejection technique:

Foundation
For an observation y ∼ f(y|θ), under the prior π(θ), if one keeps jointly simulating
θ′ ∼ π(θ) , z ∼ f(z|θ′) ,
until the auxiliary variable z is equal to the observed value, z = y, then the selected
θ′ ∼ π(θ|y)
[Rubin, 1984; Diggle & Gratton, 1984; Tavaré et al., 1997]
A as A...pproximative

When y is a continuous random variable, strict equality z = y is replaced with a tolerance zone
ρ(y, z) ≤ ε
where ρ is a distance
Output distributed from
π_ε(θ) ∝ π(θ) Pθ{ρ(y, z) < ε} ∝ π(θ | ρ(y, z) < ε)
[Pritchard et al., 1999]
ABC algorithm

In most implementations, further degree of A...pproximation:
Algorithm 1 Likelihood-free rejection sampler
for i = 1 to N do
  repeat
    generate θ′ from the prior distribution π(·)
    generate z from the likelihood f(·|θ′)
  until ρ{η(z), η(y)} ≤ ε
  set θi = θ′
end for
where η(y) defines a (not necessarily sufficient) statistic
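A minimal sketch of this rejection sampler in Python; the normal toy model, the uniform prior, the summary and the tolerance below are illustrative assumptions, not part of the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_rejection(y_obs, prior_sample, simulate, summary, distance, eps, N):
    """Algorithm 1: keep theta' whenever rho(eta(z), eta(y)) <= eps."""
    eta_obs = summary(y_obs)
    accepted = []
    while len(accepted) < N:
        theta = prior_sample()                      # theta' ~ pi(.)
        z = simulate(theta)                         # z ~ f(.|theta')
        if distance(summary(z), eta_obs) <= eps:    # rho{eta(z), eta(y)} <= eps
            accepted.append(theta)
    return np.array(accepted)

# toy illustration (assumed): normal mean, uniform prior, sample mean as summary
y_obs = rng.normal(1.0, 1.0, size=50)
post = abc_rejection(
    y_obs,
    prior_sample=lambda: rng.uniform(-5, 5),
    simulate=lambda th: rng.normal(th, 1.0, size=50),
    summary=lambda x: x.mean(),
    distance=lambda a, b: abs(a - b),
    eps=0.1, N=500)
print(post.mean(), post.std())
```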
Output
The likelihood-free algorithm samples from the marginal in z of:
π_ε(θ, z|y) = π(θ) f(z|θ) I_{A_{ε,y}}(z) / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ ,
where A_{ε,y} = {z ∈ D | ρ(η(z), η(y)) < ε}.

The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution:
π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|y) .
...does it?!
Output
The likelihood-free algorithm samples from the marginal in z of:
π_ε(θ, z|y) = π(θ) f(z|θ) I_{A_{ε,y}}(z) / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ ,
where A_{ε,y} = {z ∈ D | ρ(η(z), η(y)) < ε}.

The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the restricted posterior distribution:
π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|η(y)) .
Not so good..!
Comments

Role of distance paramount (because ε ≠ 0)
Scaling of components of η(y) is also determinant
ε matters little if "small enough"
representative of "curse of dimensionality"
small is beautiful!
the data as a whole may be paradoxically weakly informative for ABC
curse of dimensionality in summaries

Consider that
the simulated summary statistics η(y1), . . . , η(yN)
the observed summary statistic η(yobs)
are iid with uniform distribution on [0, 1]^d
Take δ∞(d, N) = E[ min_{i=1,...,N} ||η(yobs) − η(yi)||_∞ ]

              N = 100     N = 1,000   N = 10,000   N = 100,000
δ∞(1, N)      0.0025      0.00025     0.000025     0.0000025
δ∞(2, N)      0.033       0.01        0.0033       0.001
δ∞(10, N)     0.28        0.22        0.18         0.14
δ∞(200, N)    0.48        0.48        0.47         0.46
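The δ∞(d, N) values above can be checked by direct Monte Carlo; the number of replications below is an arbitrary assumption, and only the smaller values of N are run to keep the sketch cheap:

```python
import numpy as np

rng = np.random.default_rng(1)

def delta_inf(d, N, reps=200):
    """Monte Carlo estimate of E[min_i ||eta(y_obs) - eta(y_i)||_inf]
    for iid U([0,1]^d) summaries."""
    vals = []
    for _ in range(reps):
        obs = rng.random(d)
        sims = rng.random((N, d))
        vals.append(np.abs(sims - obs).max(axis=1).min())
    return float(np.mean(vals))

for d in (1, 2, 10, 200):
    print(d, [round(delta_inf(d, N), 5) for N in (100, 1000)])
```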
ABC (simul’) advances

how approximative is ABC?

ABC as knn

Simulating from the prior is often poor in efficiency
Either modify the proposal distribution on θ to increase the density of x's within the vicinity of y...
[Marjoram et al, 2003; Bortot et al., 2007; Sisson et al., 2007]
...or by viewing the problem as a conditional density estimation and by developing techniques to allow for larger ε
[Beaumont et al., 2002]
...or even by including ε in the inferential framework [ABCµ]
[Ratmann et al., 2009]
ABC as knn
[Biau et al., 2013, Annales de l'IHP]
Practice of ABC: determine tolerance ε as a quantile on observed distances, say 10% or 1% quantile,
ε = εN = qα(d1, . . . , dN)

Interpretation of ε as nonparametric bandwidth only approximation of the actual practice
[Blum & François, 2010]
ABC is a k-nearest neighbour (knn) method with kN = NεN
[Loftsgaarden & Quesenberry, 1965]
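A sketch of this quantile-based calibration; the simulation budget, the quantile level α, and the sup-norm distance are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def abc_knn(y_obs, prior_sample, simulate, summary, N=10_000, alpha=0.01):
    """Keep the k_N ~ alpha*N simulations whose summaries are closest to eta(y_obs)."""
    eta_obs = np.atleast_1d(summary(y_obs))
    thetas = np.array([prior_sample() for _ in range(N)])
    etas = np.array([np.atleast_1d(summary(simulate(th))) for th in thetas])
    dists = np.abs(etas - eta_obs).max(axis=1)   # rho = sup-norm distance
    eps = np.quantile(dists, alpha)              # epsilon_N = q_alpha(d_1, ..., d_N)
    return thetas[dists <= eps], eps
```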
ABC consistency
Provided
kN / log log N −→ ∞ and kN / N −→ 0
as N → ∞, for almost all s0 (with respect to the distribution of S), with probability 1,
(1/kN) Σ_{j=1}^{kN} ϕ(θj) −→ E[ϕ(θj) | S = s0]
[Devroye, 1982]
Biau et al. (2013) also recall pointwise and integrated mean square error consistency results on the corresponding kernel estimate of the conditional posterior distribution, under constraints
kN → ∞,  kN / N → 0,  hN → 0  and  hN^p kN → ∞
Rates of convergence

Further assumptions (on target and kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like
when m = 1, 2, 3, kN ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)}
when m = 4, kN ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)} log N
when m > 4, kN ≈ N^{(p+4)/(m+p+4)} and rate N^{−4/(m+p+4)}
[Biau et al., 2013]

Drag: Only applies to sufficient summary statistics
ABC inference machine
Intractable likelihoods
ABC methods
ABC as an inference machine
Error inc.
Exact BC and approximate
targets
summary statistic
ABC for model choice
Model choice consistency
Conclusion and perspectives
How Bayesian is aBc..?

may be a convergent method of inference (meaningful?
sufficient? foreign?)
approximation error unknown (w/o massive simulation)
pragmatic/empirical B (there is no other solution!)
many calibration issues (tolerance, distance, statistics)
the NP side should be incorporated into the whole B picture
the approximation error should also be part of the B inference
ABCµ

Idea Infer about the error as well as about the parameter:
Use of a joint density
f(θ, ε|y) ∝ ξ(ε|y, θ) × πθ(θ) × π_ε(ε)
where y is the data, and ξ(ε|y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and y when z ∼ f(z|θ)
Warning! Replacement of ξ(ε|y, θ) with a non-parametric kernel approximation.
[Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]
Wilkinson's exact BC (not exactly!)
ABC approximation error (i.e. non-zero tolerance) replaced with exact simulation from a controlled approximation to the target, convolution of true posterior with kernel function
π_ε(θ, z|y) = π(θ) f(z|θ) K_ε(y − z) / ∫ π(θ) f(z|θ) K_ε(y − z) dz dθ ,
with K_ε kernel parameterised by bandwidth ε.
[Wilkinson, 2013]

Theorem
The ABC algorithm based on the production of a randomised observation ỹ = y + ξ, ξ ∼ K_ε, and an acceptance probability of
K_ε(y − z)/M
gives draws from the posterior distribution π(θ|y).
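A sketch of this kernel-acceptance step for a scalar observation; the Gaussian choice of K_ε and the bound M = K_ε(0) are assumptions made for the illustration:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def abc_kernel(y_obs, prior_sample, simulate, eps, N):
    """Accept (theta, z) with probability K_eps(y - z) / M, K_eps Gaussian with bandwidth eps."""
    M = norm.pdf(0.0, scale=eps)                 # upper bound of the kernel
    draws = []
    while len(draws) < N:
        theta = prior_sample()
        z = simulate(theta)
        if rng.random() < norm.pdf(y_obs - z, scale=eps) / M:
            draws.append(theta)
    return np.array(draws)
```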
How exact a BC?
Pros
Pseudo-data from true model vs. observed data from noisy model
Outcome is completely controlled
Link with ABCµ when assuming y observed with measurement error with density K_ε
Relates to the theory of model approximation
[Kennedy & O'Hagan, 2001]
leads to "noisy ABC": just do it!, i.e. perturb the data down to precision ε and proceed
[Fearnhead & Prangle, 2012]
Cons
Requires K_ε to be bounded by M
True approximation error never assessed
Which summary?

Fundamental difficulty of the choice of the summary statistic when there is no non-trivial sufficient statistic
Loss of statistical information balanced against gain in data roughening
Approximation error and information loss remain unknown
Choice of statistics induces choice of distance function
towards standardisation
may be imposed for external/practical reasons (e.g., DIYABC)
may gather several non-B point estimates
can learn about efficient combination
Semi-automatic ABC

Fearnhead and Prangle (2012) study ABC and the selection of the summary statistic in close proximity to Wilkinson's proposal
ABC considered as inferential method and calibrated as such
randomised (or 'noisy') version of the summary statistics
η̃(y) = η(y) + τξ
derivation of a well-calibrated version of ABC, i.e. an algorithm that gives proper predictions for the distribution associated with this randomised summary statistic
[weak property]
Summary [of F&P/statistics]
optimality of the posterior expectation
E[θ|y]
of the parameter of interest as summary statistics η(y)!
[requires iterative process]
use of the standard quadratic loss function
(θ − θ0)^T A(θ − θ0)
recent extension to model choice, optimality of Bayes factor
B12(y)
[F&P, ISBA 2012, Kyoto]
LDA summaries for model choice

In parallel to F&P's semi-automatic ABC, selection of the most discriminant subvector out of a collection of summary statistics, based on Linear Discriminant Analysis (LDA)
[Estoup & al., 2012, Mol. Ecol. Res.]
Solution now implemented in DIYABC.2
[Cornuet & al., 2008, Bioinf.; Estoup & al., 2013]
Implementation
Step 1: Take a subset of the α% (e.g., 1%) best simulations from an ABC reference table usually including 10^6–10^9 simulations for each of the M compared scenarios/models
Selection based on normalized Euclidian distance computed between observed and simulated raw summaries
Step 2: run LDA on this subset to transform summaries into (M − 1) discriminant variables
When computing LDA functions, weight simulated data with an Epanechnikov kernel
Step 3: Estimation of the posterior probabilities of each competing scenario/model by polychotomous local logistic regression against the M − 1 most discriminant variables
[Cornuet & al., 2008, Bioinformatics]
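A sketch of Steps 1–2 with scikit-learn; the array layout of the reference table, the 1% retention rate, and the omission of the Epanechnikov weighting are simplifying assumptions:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def lda_summaries(ref_summaries, ref_models, obs_summary, alpha=0.01):
    """Step 1: keep the alpha fraction of simulations closest to the observed raw
    summaries (normalised Euclidean distance); Step 2: project the kept summaries
    onto the (M - 1) LDA discriminant axes."""
    scale = ref_summaries.std(axis=0)
    dist = np.sqrt((((ref_summaries - obs_summary) / scale) ** 2).sum(axis=1))
    keep = dist <= np.quantile(dist, alpha)
    lda = LinearDiscriminantAnalysis()
    lda.fit(ref_summaries[keep], ref_models[keep])
    return lda.transform(ref_summaries[keep]), lda.transform(obs_summary[None, :])
```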
LDA advantages

much faster computation of scenario probabilities via
polychotomous regression
a (much) lower number of explanatory variables improves the
accuracy of the ABC approximation, reduces the tolerance
and avoids extra costs in constructing the reference table
allows for a large collection of initial summaries
ability to evaluate Type I and Type II errors on more complex
models
LDA reduces correlation among explanatory variables
When available, using both simulated and real data sets, posterior
probabilities of scenarios computed from LDA-transformed and raw
summaries are strongly correlated
ABC for model choice

Intractable likelihoods
ABC methods
ABC as an inference machine
ABC for model choice
BMC Principle
Gibbs random fields (counterexample)
Generic ABC model choice
Model choice consistency
Conclusion and perspectives
Bayesian model choice

BMC Principle
Several models
M1 , M2 , . . .
are considered simultaneously for dataset y and model index M
central to inference.
Use of
prior π(M = m), plus
prior distribution on the parameter conditional on the value m
of the model index, πm (θm )
Bayesian model choice

BMC Principle
Several models
M1 , M2 , . . .
are considered simultaneously for dataset y and model index M
central to inference.
Goal is to derive the posterior distribution of M,
π(M = m|data)
a challenging computational target when models are complex.
Generic ABC for model choice

Algorithm 2 Likelihood-free model choice sampler (ABC-MC)
for t = 1 to T do
  repeat
    Generate m from the prior π(M = m)
    Generate θm from the prior πm(θm)
    Generate z from the model fm(z|θm)
  until ρ{η(z), η(y)} < ε
  Set m(t) = m and θ(t) = θm
end for
[Grelaud & al., 2009; Toni & al., 2009]
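A minimal sketch of Algorithm 2; the model prior, the priors πm, the simulators and the tolerance are placeholders to be supplied by the user:

```python
import numpy as np

rng = np.random.default_rng(4)

def abc_model_choice(y_obs, model_prior, priors, simulators, summary, eps, T):
    """ABC-MC: returns accepted model indices m^(t) and parameters theta^(t)."""
    eta_obs = np.atleast_1d(summary(y_obs))
    models, thetas = [], []
    while len(models) < T:
        m = rng.choice(len(priors), p=model_prior)     # m ~ pi(M = m)
        theta = priors[m]()                            # theta_m ~ pi_m(theta_m)
        z = simulators[m](theta)                       # z ~ f_m(z | theta_m)
        if np.abs(np.atleast_1d(summary(z)) - eta_obs).max() < eps:
            models.append(m)
            thetas.append(theta)
    return np.array(models), thetas

# pi(M = m | y) is then estimated by the acceptance frequency (models == m).mean()
```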
ABC estimates

Posterior probability π(M = m|y) approximated by the frequency of acceptances from model m
(1/T) Σ_{t=1}^T I_{m(t)=m} .
Extension to a weighted polychotomous logistic regression
estimate of π(M = m|y), with non-parametric kernel weights
[Cornuet et al., DIYABC, 2009]
Potts model
Skip MRFs

Potts model
Distribution with an energy function of the form
θS(y) = θ Σ_{l∼i} δ_{yl = yi}
where l∼i denotes a neighbourhood structure
In most realistic settings, summation
Zθ = Σ_{x∈X} exp{θS(x)}
involves too many terms to be manageable and numerical approximations cannot always be trusted
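To see the intractability concretely, a brute-force evaluation of Zθ is sketched below for a tiny 3×3 binary grid (a toy assumption): it already sums over 2^9 configurations, and the count grows as q^(n²):

```python
import numpy as np
from itertools import product

def potts_Z(theta, n=3, q=2):
    """Brute-force Z_theta = sum_x exp(theta * S(x)) on an n x n grid,
    with S(x) = number of identical 4-neighbour pairs."""
    Z = 0.0
    for flat in product(range(q), repeat=n * n):
        x = np.array(flat).reshape(n, n)
        S = (x[:-1, :] == x[1:, :]).sum() + (x[:, :-1] == x[:, 1:]).sum()
        Z += np.exp(theta * S)
    return Z

print(potts_Z(0.5))   # 512 terms here; a 100 x 100 binary grid would need 2^10000
```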
Neighbourhood relations

Setup
Choice to be made between M neighbourhood relations
i ∼_m i′   (0 ≤ m ≤ M − 1)
with
S_m(x) = Σ_{i ∼_m i′} I{x_i = x_{i′}}
driven by the posterior probabilities of the models.
Model index

Computational target:
P(M = m|x) ∝ ∫_{Θm} fm(x|θm) πm(θm) dθm  π(M = m)

If S(x) sufficient statistic for the joint parameters (M, θ0, . . . , θM−1),
P(M = m|x) = P(M = m|S(x)) .
Sufficient statistics in Gibbs random fields

Each model m has its own sufficient statistic Sm(·) and S(·) = (S0(·), . . . , SM−1(·)) is also (model-)sufficient.
Explanation: For Gibbs random fields,
x|M = m ∼ fm(x|θm) = fm^1(x|S(x)) fm^2(S(x)|θm) = (1/n(S(x))) fm^2(S(x)|θm)
where
n(S(x)) = #{ x̃ ∈ X : S(x̃) = S(x) }
⇒ S(x) is sufficient for the joint parameters
Toy example

iid Bernoulli model versus two-state first-order Markov chain, i.e.
f0(x|θ0) = exp( θ0 Σ_{i=1}^n I{xi = 1} ) / {1 + exp(θ0)}^n ,
versus
f1(x|θ1) = (1/2) exp( θ1 Σ_{i=2}^n I{xi = xi−1} ) / {1 + exp(θ1)}^{n−1} ,
with priors θ0 ∼ U(−5, 5) and θ1 ∼ U(0, 6) (inspired by "phase transition" boundaries).
About sufficiency

If η1(x) sufficient statistic for model m = 1 and parameter θ1 and η2(x) sufficient statistic for model m = 2 and parameter θ2, (η1(x), η2(x)) is not always sufficient for (m, θm)
⇒ Potential loss of information at the testing level
Poisson/geometric example

Sample
x = (x1, . . . , xn)
from either a Poisson P(λ) or from a geometric G(p)
Sum
S = Σ_{i=1}^n xi = η(x)
sufficient statistic for either model but not simultaneously
Limiting behaviour of B12 (T → ∞)
ABC approximation
B12(y) = Σ_{t=1}^T I_{mt=1} I_{ρ{η(zt),η(y)}≤ε} / Σ_{t=1}^T I_{mt=2} I_{ρ{η(zt),η(y)}≤ε} ,
where the (mt, zt)'s are simulated from the (joint) prior
As T goes to infinity, limit
B12^ε(y) = ∫ I_{ρ{η(z),η(y)}<ε} π1(θ1) f1(z|θ1) dz dθ1 / ∫ I_{ρ{η(z),η(y)}<ε} π2(θ2) f2(z|θ2) dz dθ2
         = ∫ I_{ρ{η,η(y)}<ε} π1(θ1) f1^η(η|θ1) dη dθ1 / ∫ I_{ρ{η,η(y)}<ε} π2(θ2) f2^η(η|θ2) dη dθ2 ,
where f1^η(η|θ1) and f2^η(η|θ2) distributions of η(z)
Limiting behaviour of B12 (ε → 0)

When ε goes to zero,
B12^η(y) = ∫ π1(θ1) f1^η(η(y)|θ1) dθ1 / ∫ π2(θ2) f2^η(η(y)|θ2) dθ2
⇒ Bayes factor based on the sole observation of η(y)
Limiting behaviour of B12 (under sufficiency)

If η(y) sufficient statistic in both models,
fi(y|θi) = gi(y) fi^η(η(y)|θi)
Thus
B12(y) = ∫_{Θ1} π(θ1) g1(y) f1^η(η(y)|θ1) dθ1 / ∫_{Θ2} π(θ2) g2(y) f2^η(η(y)|θ2) dθ2
       = (g1(y)/g2(y)) ∫ π1(θ1) f1^η(η(y)|θ1) dθ1 / ∫ π2(θ2) f2^η(η(y)|θ2) dθ2
       = (g1(y)/g2(y)) B12^η(y) .
[Didelot, Everitt, Johansen & Lawson, 2011]

⇒ No discrepancy only when cross-model sufficiency
Poisson/geometric example (back)

Sample
x = (x1, . . . , xn)
from either a Poisson P(λ) or from a geometric G(p)
Discrepancy ratio
g1(x) / g2(x) = [ S! n^{−S} / Π_i xi! ] / [ 1 / C(n+S−1, S) ]
Poisson/geometric discrepancy

Range of B12^η(x) versus B12(x): the values produced have nothing in common.
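A quick numerical look at the discrepancy ratio above (the sample size, Poisson rate and simulated datasets are assumptions): datasets sharing the same sum S can carry very different values of g1(x)/g2(x), so B12(x) and B12^η(x) need not agree.

```python
import numpy as np
from math import lgamma, comb, log

rng = np.random.default_rng(5)

def log_g_ratio(x):
    """log of g1(x)/g2(x) = [S! n^{-S} / prod_i x_i!] / [1 / C(n+S-1, S)]."""
    n, S = len(x), int(x.sum())
    log_num = lgamma(S + 1) - S * log(n) - sum(lgamma(int(xi) + 1) for xi in x)
    log_den = -log(comb(n + S - 1, S))
    return log_num - log_den

for _ in range(3):
    x = rng.poisson(2.0, size=10)      # Poisson(2) samples of size n = 10
    print(int(x.sum()), round(log_g_ratio(x), 3))
```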
Formal recovery

Creating an encompassing exponential family
f(x|θ1, θ2, α1, α2) ∝ exp{θ1^T η1(x) + θ2^T η2(x) + α1 t1(x) + α2 t2(x)}
leads to a sufficient statistic (η1(x), η2(x), t1(x), t2(x))
[Didelot, Everitt, Johansen & Lawson, 2011]
In the Poisson/geometric case, if Π_i xi! is added to S, no discrepancy
Only applies in genuine sufficiency settings...
⇒ Inability to evaluate information loss due to summary statistics
Meaning of the ABC-Bayes factor

The ABC approximation to the Bayes Factor is based solely on the summary statistics...
In the Poisson/geometric case, if E[yi] = θ0 > 0,
lim_{n→∞} B12^η(y) = ((θ0 + 1)² / θ0) e^{−θ0}
ABC model choice consistency

Intractable likelihoods
ABC methods
ABC as an inference machine
ABC for model choice
Model choice consistency
Formalised framework
Consistency results
Summary statistics
Conclusion and perspectives
The starting point

Central question to the validation of ABC for model choice:
When is a Bayes factor based on an insufficient statistic T(y) consistent?
Note/warning: the conclusion drawn on T(y) through B12^T(y) necessarily differs from the conclusion drawn on y through B12(y)
[Marin, Pillai, X, & Rousseau, JRSS B, 2013]
A benchmark toy example

Comparison suggested by referee of PNAS paper [thanks!]:
[X, Cornuet, Marin, & Pillai, Aug. 2011]
Model M1: y ∼ N(θ1, 1) opposed
to model M2: y ∼ L(θ2, 1/√2), Laplace distribution with mean θ2 and scale parameter 1/√2 (variance one).
Four possible statistics
1. sample mean ȳ (sufficient for M1 if not M2);
2. sample median med(y) (insufficient);
3. sample variance var(y) (ancillary);
4. median absolute deviation mad(y) = med(|y − med(y)|);
[Figure: boxplots of ABC posterior probabilities of model M1 under Gauss and Laplace data, n = 100, for the different summary statistics]
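A sketch of this benchmark; the sample size, the N(0, 4) prior on θ1 and θ2, and the simulation budget are assumptions not stated on the slide. It estimates π(M = 1 | y) from each summary statistic separately:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100

stats = {
    "mean":   lambda x: x.mean(),
    "median": lambda x: np.median(x),
    "var":    lambda x: x.var(),
    "mad":    lambda x: np.median(np.abs(x - np.median(x))),
}

def simulate(m, theta):
    if m == 1:
        return rng.normal(theta, 1.0, size=n)                  # M1: N(theta, 1)
    return rng.laplace(theta, 1.0 / np.sqrt(2), size=n)        # M2: L(theta, 1/sqrt(2))

y_obs = rng.normal(0.0, 1.0, size=n)                           # data generated from M1
N, alpha = 20_000, 0.01
ms = rng.integers(1, 3, size=N)                                # uniform model prior
thetas = rng.normal(0.0, 2.0, size=N)                          # assumed prior on theta
sims = [simulate(m, th) for m, th in zip(ms, thetas)]

for name, T in stats.items():
    d = np.abs(np.array([T(z) for z in sims]) - T(y_obs))
    keep = d <= np.quantile(d, alpha)
    print(name, round(float((ms[keep] == 1).mean()), 2))       # ABC estimate of pi(M=1|y)
```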
Framework

Starting from sample
y = (y1, . . . , yn)
the observed sample, not necessarily iid, with true distribution
y ∼ P^n
Summary statistics
T(y) = T^n = (T1(y), T2(y), · · · , Td(y)) ∈ R^d
with true distribution T^n ∼ G^n.
Framework

⇒ Comparison of
– under M1, y ∼ F1,n(·|θ1) where θ1 ∈ Θ1 ⊂ R^{p1}
– under M2, y ∼ F2,n(·|θ2) where θ2 ∈ Θ2 ⊂ R^{p2}
turned into
– under M1, T(y) ∼ G1,n(·|θ1), and θ1|T(y) ∼ π1(·|T^n)
– under M2, T(y) ∼ G2,n(·|θ2), and θ2|T(y) ∼ π2(·|T^n)
Assumptions

A collection of asymptotic “standard” assumptions:
[A1] is a standard central limit theorem under the true model
[A2] controls the large deviations of the estimator Tn from the
estimand µ(θ)
[A3] is the standard prior mass condition found in Bayesian
asymptotics (di effective dimension of the parameter)
[A4] restricts the behaviour of the model density against the true
density
[Think CLT!]
Assumptions

A collection of asymptotic "standard" assumptions:
[Think CLT!]
[A1] There exist
a sequence {vn} converging to +∞,
a distribution Q,
a symmetric, d × d positive definite matrix V0 and
a vector µ0 ∈ R^d
such that
vn V0^{−1/2} (T^n − µ0) ⇝ Q under G^n, as n → ∞
Assumptions

A collection of asymptotic "standard" assumptions:
[Think CLT!]
[A2] For i = 1, 2, there exist sets F_{n,i} ⊂ Θi, functions µi(θi) and constants εi, τi, αi > 0 such that for all τ > 0,
sup_{θi ∈ F_{n,i}} G_{i,n}[ |T^n − µi(θi)| > τ |µi(θi) − µ0| ∧ εi | θi ] (|τ µi(θi) − µ0| ∧ εi)^{−αi} ≲ vn^{−αi}
with
πi(F^c_{n,i}) = o(vn^{−τi}).
Assumptions

A collection of asymptotic "standard" assumptions:
[Think CLT!]
[A3] If inf{|µi(θi) − µ0|; θi ∈ Θi} = 0, defining (u > 0)
S_{n,i}(u) = { θi ∈ F_{n,i}; |µi(θi) − µ0| ≤ u vn^{−1} } ,
there exist constants di < τi ∧ αi − 1 such that
πi(S_{n,i}(u)) ∼ u^{di} vn^{−di} ,   ∀u ≲ vn
Assumptions

A collection of asymptotic "standard" assumptions:
[Think CLT!]
[A4] If inf{|µi(θi) − µ0|; θi ∈ Θi} = 0, for any ε > 0, there exist U, δ > 0 and (En)n such that, if θi ∈ S_{n,i}(U),
En ⊂ { t; gi(t|θi) < δ gn(t) }   and   Gn(E^c_n) < ε.
Assumptions

A collection of asymptotic “standard” assumptions:
[Think CLT!]
Again (sumup)
[A1] is a standard central limit theorem under the true model
[A2] controls the large deviations of the estimator Tn from the
estimand µ(θ)
[A3] is the standard prior mass condition found in Bayesian
asymptotics (di effective dimension of the parameter)
[A4] restricts the behaviour of the model density against the true
density
Effective dimension

Understanding di in [A3]:
defined only when µ0 ∈ {µi (θi ), θi ∈ Θi },
πi(θi : |µi(θi) − µ0| < n^{−1/2}) = O(n^{−di/2})
is the effective dimension of the model Θi around µ0
Effective dimension

Understanding di in [A3]:
when inf{|µi(θi) − µ0|; θi ∈ Θi} = 0,
mi(T^n) / gn(T^n) ∼ vn^{−di} ,
thus
log( mi(T^n) / gn(T^n) ) ∼ −di log vn
and vn^{−di} penalization factor resulting from integrating θi out (see effective number of parameters in DIC)
Effective dimension

Understanding di in [A3]:
In regular models, di dimension of T(Θi ), leading to BIC
In non-regular models, di can be smaller
Asymptotic marginals
Asymptotically, under [A1]–[A4]
mi(t) = ∫_{Θi} gi(t|θi) πi(θi) dθi
is such that
(i) if inf{|µi(θi) − µ0|; θi ∈ Θi} = 0,
Cl vn^{d−di} ≤ mi(T^n) ≤ Cu vn^{d−di}
and
(ii) if inf{|µi(θi) − µ0|; θi ∈ Θi} > 0
mi(T^n) = o_{P^n}[ vn^{d−τi} + vn^{d−αi} ] .
Within-model consistency

Under same assumptions, if inf{|µi(θi) − µ0|; θi ∈ Θi} = 0, the posterior distribution of µi(θi) given T^n is consistent at rate 1/vn provided αi ∧ τi > di.
Between-model consistency

Consequence of above is that asymptotic behaviour of the Bayes
factor is driven by the asymptotic mean value of Tn under both
models. And only by this mean value!
Indeed, if
inf{|µ0 − µ2(θ2)|; θ2 ∈ Θ2} = inf{|µ0 − µ1(θ1)|; θ1 ∈ Θ1} = 0
then
Cl vn^{−(d1−d2)} ≤ m1(T^n) / m2(T^n) ≤ Cu vn^{−(d1−d2)} ,
where Cl, Cu = O_{P^n}(1), irrespective of the true model.
⇒ Only depends on the difference d1 − d2: no consistency
Else, if
inf{|µ0 − µ2(θ2)|; θ2 ∈ Θ2} > inf{|µ0 − µ1(θ1)|; θ1 ∈ Θ1} = 0
then
m1(T^n) / m2(T^n) ≥ Cu min( vn^{−(d1−α2)} , vn^{−(d1−τ2)} )
Consistency theorem

If
inf{|µ0 − µ2(θ2)|; θ2 ∈ Θ2} = inf{|µ0 − µ1(θ1)|; θ1 ∈ Θ1} = 0,
Bayes factor
B12^T = O(vn^{−(d1−d2)})
irrespective of the true model. It is inconsistent since it always picks the model with the smallest dimension
Consistency theorem

If P^n belongs to one of the two models and if µ0 cannot be attained by the other one:
0 = min( inf{|µ0 − µi(θi)|; θi ∈ Θi}, i = 1, 2 )
  < max( inf{|µ0 − µi(θi)|; θi ∈ Θi}, i = 1, 2 ) ,
then the Bayes factor B12^T is consistent
Consequences on summary statistics

Bayes factor driven by the means µi (θi ) and the relative position
of µ0 wrt both sets {µi (θi ); θi ∈ Θi }, i = 1, 2.
For ABC, this implies the simplest summary statistics Tn are
ancillary statistics with different mean values under both models
Else, if Tn asymptotically depends on some of the parameters of
the models, it is possible that there exists θi ∈ Θi such that
µi (θi ) = µ0 even though model M1 is misspecified
Toy example: Laplace versus Gauss [1]

If
T^n = n^{−1} Σ_{i=1}^n Xi^4 ,   µ1(θ) = 3 + θ^4 + 6θ^2 ,   µ2(θ) = 6 + · · ·
and the true distribution is Laplace with mean θ0 = 1, under the Gaussian model the value θ∗ = 2√3 − 3 also leads to µ0 = µ(θ∗)
[here d1 = d2 = d = 1]
⇒ A Bayes factor associated with such a statistic is inconsistent
[Figure: boxplots of the fourth-moment summary statistic under models M1 and M2, n = 1000]
Toy example: Laplace versus Gauss [0]
When
T(y) = (ȳn^(4), ȳn^(6))
and the true distribution is Laplace with mean θ0 = 0, then
µ0 = 6, µ1(θ1∗) = 6 with θ1∗ = 2√3 − 3
[d1 = 1 and d2 = 1/2]
thus
B12 ∼ n^{−1/4} → 0 : consistent
Under the Gaussian model µ0 = 3, µ2(θ2) ≥ 6 > 3 ∀θ2
B12 → +∞ : consistent
Toy example: Laplace versus Gauss [0]

[Figure: boxplots of ABC posterior probabilities under Gauss and Laplace data, based on the fourth and sixth moments, for n = 100, 1000 and 10000]
Checking for adequate statistics

Run a practical check of the relevance (or non-relevance) of T^n:
null hypothesis that both models are compatible with the statistic T^n
H0 : inf{|µ2(θ2) − µ0|; θ2 ∈ Θ2} = 0
against
H1 : inf{|µ2(θ2) − µ0|; θ2 ∈ Θ2} > 0
testing procedure provides estimates of mean of T^n under each model and checks for equality
Checking in practice
Under each model Mi, generate ABC sample θi,l, l = 1, · · · , L
For each θi,l, generate yi,l ∼ Fi,n(·|ψi,l), derive T^n(yi,l) and compute
µ̂i = (1/L) Σ_{l=1}^L T^n(yi,l) ,  i = 1, 2 .
Conditionally on T^n(y),
√L ( µ̂i − E^π[µi(θi)|T^n(y)] ) ⇝ N(0, Vi),
Test for a common mean
H0 : µ̂1 ∼ N(µ0, V1) , µ̂2 ∼ N(µ0, V2)
against the alternative of different means
H1 : µ̂i ∼ N(µi, Vi), with µ1 ≠ µ2.
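A sketch of this check; the ABC samples, the simulators, and the use of a Wald-type χ² statistic for the common-mean test are assumptions:

```python
import numpy as np
from scipy import stats

def adequacy_check(theta_abc, simulators, summary):
    """mu_hat_i = (1/L) sum_l T^n(y_{i,l}) under each model i = 1, 2,
    then a Wald test of H0: common mean against H1: different means."""
    mu, V = [], []
    for ths, sim in zip(theta_abc, simulators):
        T = np.array([np.atleast_1d(summary(sim(th))) for th in ths])   # L x d
        mu.append(T.mean(axis=0))
        V.append(np.atleast_2d(np.cov(T, rowvar=False)) / len(T))
    diff = mu[0] - mu[1]
    W = float(diff @ np.linalg.inv(V[0] + V[1]) @ diff)                 # ~ chi^2_d under H0
    return W, float(stats.chi2.sf(W, df=diff.size))
```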
Toy example: Laplace versus Gauss

[Figure: normalised χ² statistics of the adequacy test under Gauss and Laplace data, without and with mad]
A population genetic illustration
Two populations (1 and 2) having diverged at a fixed known time in the past and a third population (3) which diverged from one of those two populations (models 1 and 2, respectively).
Observation of 50 diploid individuals/population genotyped at 5, 50 or 100 independent microsatellite loci.
Stepwise mutation model: the number of repeats of the mutated gene increases or decreases by one. Mutation rate µ common to all loci set to 0.005 (single parameter) with uniform prior distribution
µ ∼ U[0.0001, 0.01]
A population genetic illustration

Summary statistics associated to the (δµ)² distance
xl,i,j repeated number of allele in locus l = 1, . . . , L for individual i = 1, . . . , 100 within the population j = 1, 2, 3. Then
(δµ)²_{j1,j2} = (1/L) Σ_{l=1}^L [ (1/100) Σ_{i1=1}^{100} x_{l,i1,j1} − (1/100) Σ_{i2=1}^{100} x_{l,i2,j2} ]² .
A population genetic illustration

For two copies of locus l with allele sizes x_{l,i,j1} and x_{l,i′,j2}, most recent common ancestor at coalescence time τ_{j1,j2}, gene genealogy distance of 2τ_{j1,j2}, hence number of mutations Poisson with parameter 2µτ_{j1,j2}. Therefore,
E[ (x_{l,i,j1} − x_{l,i′,j2})² | τ_{j1,j2} ] = 2µτ_{j1,j2}
and

                  Model 1    Model 2
E[(δµ)²_{1,2}]    2µ1 t      2µ2 t
E[(δµ)²_{1,3}]    2µ1 t      2µ2 t
E[(δµ)²_{2,3}]    2µ1 t      2µ2 t
A population genetic illustration

Thus,
Bayes factor based only on distance (δµ)²_{1,2} not convergent: if µ1 = µ2, same expectation
Bayes factor based only on distance (δµ)²_{1,3} or (δµ)²_{2,3} not convergent: if µ1 = 2µ2 or 2µ1 = µ2, same expectation
if two of the three distances are used, Bayes factor converges: there is no (µ1, µ2) for which all expectations are equal
A population genetic illustration

[Figure: posterior probabilities that the data is from model 1, for 5, 50 and 100 loci, based on DM2(12), on DM2(13), and on DM2(13) & DM2(23)]
A population genetic illustration

[Figure: p-values of the adequacy test based on DM2(12) and on DM2(13) & DM2(23)]
Embedded models

When M1 submodel of M2, and if the true distribution belongs to the smaller model M1, Bayes factor is of order
vn^{−(d1−d2)}
If summary statistic only informative on a parameter that is the same under both models, i.e. if d1 = d2, then
⇒ the Bayes factor is not consistent
Else, d1 < d2 and Bayes factor is going to ∞ under M1. If true distribution not in M1, then
⇒ Bayes factor is consistent only if µ1 = µ2 = µ0
Summary

Model selection feasible with ABC
Choice of summary statistics is paramount
At best, ABC output → π(· | T(y)), which concentrates around µ0
For consistent estimation: {θ; µ(θ) = µ0} = {θ0}
For consistent testing:
{µ1(θ1), θ1 ∈ Θ1} ∩ {µ2(θ2), θ2 ∈ Θ2} = ∅
Conclusion/perspectives

abc part of a wider picture to handle complex/Big Data models, able to start from rudimentary machine learning summaries
many formats of empirical [likelihood] Bayes methods available
lack of comparative tools and of an assessment for information loss
 
How to Create and Manage Wizard in Odoo 17
How to Create and Manage Wizard in Odoo 17How to Create and Manage Wizard in Odoo 17
How to Create and Manage Wizard in Odoo 17
 
Sociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning ExhibitSociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning Exhibit
 
Food safety_Challenges food safety laboratories_.pdf
Food safety_Challenges food safety laboratories_.pdfFood safety_Challenges food safety laboratories_.pdf
Food safety_Challenges food safety laboratories_.pdf
 
REMIFENTANIL: An Ultra short acting opioid.pptx
REMIFENTANIL: An Ultra short acting opioid.pptxREMIFENTANIL: An Ultra short acting opioid.pptx
REMIFENTANIL: An Ultra short acting opioid.pptx
 
On_Translating_a_Tamil_Poem_by_A_K_Ramanujan.pptx
On_Translating_a_Tamil_Poem_by_A_K_Ramanujan.pptxOn_Translating_a_Tamil_Poem_by_A_K_Ramanujan.pptx
On_Translating_a_Tamil_Poem_by_A_K_Ramanujan.pptx
 
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
 

Pittsburgh and Toronto "Halloween US trip" seminars

  • 12. Approximate Bayesian computation Intractable likelihoods ABC methods Genesis of ABC ABC basics Advances and interpretations ABC as knn ABC as an inference machine ABC for model choice Model choice consistency Conclusion and perspectives
  • 13. Genetic background of ABC [skip genetics] ABC is a recent computational technique that only requires being able to sample from the likelihood f (·|θ) This technique stemmed from population genetics models, about 15 years ago, and population geneticists still contribute significantly to methodological developments of ABC. [Griffith & al., 1997; Tavaré & al., 1999]
  • 14. Demo-genetic inference Each model is characterized by a set of parameters θ that cover historical (time divergence, admixture time ...), demographics (population sizes, admixture rates, migration rates, ...) and genetic (mutation rate, ...) factors The goal is to estimate these parameters from a dataset of polymorphism (DNA sample) y observed at the present time Problem: most of the time, we cannot calculate the likelihood of the polymorphism data f (y|θ)...
  • 15. Demo-genetic inference Each model is characterized by a set of parameters θ that cover historical (time divergence, admixture time ...), demographics (population sizes, admixture rates, migration rates, ...) and genetic (mutation rate, ...) factors The goal is to estimate these parameters from a dataset of polymorphism (DNA sample) y observed at the present time Problem: most of the time, we cannot calculate the likelihood of the polymorphism data f (y|θ)...
  • 16. Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium Sample of 8 genes Mutations according to the Simple stepwise Mutation Model (SMM) • date of the mutations ∼ Poisson process with intensity θ/2 over the branches • MRCA = 100 • independent mutations: ±1 with pr. 1/2
  • 17. Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium Kingman’s genealogy When time axis is normalized, T (k) ∼ Exp(k(k − 1)/2) Mutations according to the Simple stepwise Mutation Model (SMM) • date of the mutations ∼ Poisson process with intensity θ/2 over the branches • MRCA = 100 • independent mutations: ±1 with pr. 1/2
  • 18. Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium Kingman’s genealogy When time axis is normalized, T (k) ∼ Exp(k(k − 1)/2) Mutations according to the Simple stepwise Mutation Model (SMM) • date of the mutations ∼ Poisson process with intensity θ/2 over the branches • MRCA = 100 • independent mutations: ±1 with pr. 1/2
  • 19. Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium Kingman’s genealogy When time axis is normalized, T (k) ∼ Exp(k(k − 1)/2) Observations: leaves of the tree, θ̂ = ? Mutations according to the Simple stepwise Mutation Model (SMM) • date of the mutations ∼ Poisson process with intensity θ/2 over the branches • MRCA = 100 • independent mutations: ±1 with pr. 1/2
  • 20. Much more interesting models. . . several independent loci Independent gene genealogies and mutations different populations linked by an evolutionary scenario made of divergences, admixtures, migrations between populations, selection pressure, etc. larger sample size, usually between 50 and 100 genes MRCA τ2 τ1 A typical evolutionary scenario: POP 0 POP 1 POP 2
  • 21. Instance of ecological questions How did the Asian ladybird beetle arrive in Europe? Why do they swarm right now? What are the routes of invasion? How to get rid of them? Why did the chicken cross the road? [Lombaert & al., 2010, PLoS ONE]
  • 22. Worldwide invasion routes of Harmonia axyridis Worldwide routes of invasion of Harmonia axyridis For each outbreak, the arrow indicates the most likely invasion pathway and the associated posterior probability, with 95% credible intervals in brackets [Estoup et al., 2012, Molecular Ecology Res.]
  • 23. c Intractable likelihood Missing (too much missing!) data structure: f (y|θ) = ∫_G f (y|G , θ) f (G |θ) dG cannot be computed in a manageable way... [Stephens & Donnelly, 2000] The genealogies are considered as nuisance parameters This modelling clearly differs from the phylogenetic perspective where the tree is the parameter of interest.
  • 24. c Intractable likelihood Missing (too much missing!) data structure: f (y|θ) = ∫_G f (y|G , θ) f (G |θ) dG cannot be computed in a manageable way... [Stephens & Donnelly, 2000] The genealogies are considered as nuisance parameters This modelling clearly differs from the phylogenetic perspective where the tree is the parameter of interest.
  • 25. A?B?C? A stands for approximate [wrong likelihood / pic?] B stands for Bayesian C stands for computation [producing a parameter sample]
  • 26. A?B?C? A stands for approximate [wrong likelihood / pic?] B stands for Bayesian C stands for computation [producing a parameter sample]
  • 27. A?B?C? A stands for approximate [wrong likelihood / pic?] B stands for Bayesian C stands for computation [producing a parameter sample] [figure: ABC posterior density estimates of θ with associated ESS values]
  • 28. How Bayesian is aBc? Could we turn the resolution into a Bayesian answer? ideally so (not meaningful: requires ∞-ly powerful computer) approximation error unknown (w/o costly simulation) true B for wrong model (formal and artificial) true B for noisy model (much more convincing) true B for estimated likelihood (back to econometrics?) illuminating the tension between information and precision
  • 29. ABC methodology Bayesian setting: target is π(θ)f (x|θ) When likelihood f (x|θ) not in closed form, likelihood-free rejection technique: Foundation For an observation y ∼ f (y|θ), under the prior π(θ), if one keeps jointly simulating θ ∼ π(θ) , z ∼ f (z|θ ) , until the auxiliary variable z is equal to the observed value, z = y, then the selected θ ∼ π(θ|y) [Rubin, 1984; Diggle & Gratton, 1984; Tavar´ et al., 1997] e
  • 30. ABC methodology Bayesian setting: target is π(θ)f (x|θ) When likelihood f (x|θ) not in closed form, likelihood-free rejection technique: Foundation For an observation y ∼ f (y|θ), under the prior π(θ), if one keeps jointly simulating θ ∼ π(θ) , z ∼ f (z|θ ) , until the auxiliary variable z is equal to the observed value, z = y, then the selected θ ∼ π(θ|y) [Rubin, 1984; Diggle & Gratton, 1984; Tavar´ et al., 1997] e
  • 31. ABC methodology Bayesian setting: target is π(θ)f (x|θ) When likelihood f (x|θ) not in closed form, likelihood-free rejection technique: Foundation For an observation y ∼ f (y|θ), under the prior π(θ), if one keeps jointly simulating θ ∼ π(θ) , z ∼ f (z|θ ) , until the auxiliary variable z is equal to the observed value, z = y, then the selected θ ∼ π(θ|y) [Rubin, 1984; Diggle & Gratton, 1984; Tavar´ et al., 1997] e
  • 32. A as A...pproximative When y is a continuous random variable, strict equality z = y is replaced with a tolerance zone ρ(y, z) ≤ ε where ρ is a distance Output distributed from π(θ) Pθ {ρ(y, z) < ε} ∝ π(θ|ρ(y, z) < ε) [Pritchard et al., 1999]
  • 33. A as A...pproximative When y is a continuous random variable, strict equality z = y is replaced with a tolerance zone ρ(y, z) ≤ ε where ρ is a distance Output distributed from π(θ) Pθ {ρ(y, z) < ε} ∝ π(θ|ρ(y, z) < ε) [Pritchard et al., 1999]
  • 34. ABC algorithm In most implementations, further degree of A...pproximation: Algorithm 1 Likelihood-free rejection sampler for i = 1 to N do repeat generate θ' from the prior distribution π(·) generate z from the likelihood f (·|θ') until ρ{η(z), η(y)} ≤ ε set θi = θ' end for where η(y) defines a (not necessarily sufficient) statistic
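To make the algorithm concrete, here is a minimal Python sketch on a toy normal-mean model; the prior, the simulator, the sample-mean summary and the tolerance value are illustrative choices for the sketch, not part of the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy setting: y ~ N(theta, 1), theta ~ N(0, 10), summary eta = sample mean
y_obs = rng.normal(1.5, 1.0, size=50)
eta_obs = y_obs.mean()

def abc_rejection(n_accept, eps):
    """Likelihood-free rejection sampler: keep theta' whenever
    rho{eta(z), eta(y)} <= eps for pseudo-data z ~ f(.|theta')."""
    accepted = []
    while len(accepted) < n_accept:
        theta = rng.normal(0.0, np.sqrt(10.0))          # theta' ~ pi(.)
        z = rng.normal(theta, 1.0, size=y_obs.size)     # z ~ f(.|theta')
        if abs(z.mean() - eta_obs) <= eps:              # rho on the summaries
            accepted.append(theta)
    return np.array(accepted)

sample = abc_rejection(n_accept=1000, eps=0.1)
print(sample.mean(), sample.std())
```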
  • 35. Output The likelihood-free algorithm samples from the marginal in z of: π_ε(θ, z|y) = π(θ)f (z|θ) I_{A_ε,y}(z) / ∫_{A_ε,y ×Θ} π(θ)f (z|θ) dz dθ , where A_ε,y = {z ∈ D | ρ(η(z), η(y)) < ε}. The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution: π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|y) . ...does it?!
  • 36. Output The likelihood-free algorithm samples from the marginal in z of: π_ε(θ, z|y) = π(θ)f (z|θ) I_{A_ε,y}(z) / ∫_{A_ε,y ×Θ} π(θ)f (z|θ) dz dθ , where A_ε,y = {z ∈ D | ρ(η(z), η(y)) < ε}. The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution: π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|y) . ...does it?!
  • 37. Output The likelihood-free algorithm samples from the marginal in z of: π_ε(θ, z|y) = π(θ)f (z|θ) I_{A_ε,y}(z) / ∫_{A_ε,y ×Θ} π(θ)f (z|θ) dz dθ , where A_ε,y = {z ∈ D | ρ(η(z), η(y)) < ε}. The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution: π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|y) . ...does it?!
  • 38. Output The likelihood-free algorithm samples from the marginal in z of: π_ε(θ, z|y) = π(θ)f (z|θ) I_{A_ε,y}(z) / ∫_{A_ε,y ×Θ} π(θ)f (z|θ) dz dθ , where A_ε,y = {z ∈ D | ρ(η(z), η(y)) < ε}. The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the restricted posterior distribution: π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|η(y)) . Not so good..!
  • 39. Comments Role of distance paramount (because ε ≠ 0) Scaling of components of η(y) is also determinant ε matters little if “small enough” representative of “curse of dimensionality” small is beautiful! the data as a whole may be paradoxically weakly informative for ABC
  • 40. curse of dimensionality in summaries Consider that the simulated summary statistics η(y1 ), . . . , η(yN ) and the observed summary statistics η(yobs ) are iid with uniform distribution on [0, 1]^d Take δ∞ (d, N) = E[ min_{i=1,...,N} ||η(yobs ) − η(yi )||_∞ ]
                 N = 100     N = 1,000   N = 10,000   N = 100,000
  δ∞ (1, N)      0.0025      0.00025     0.000025     0.0000025
  δ∞ (2, N)      0.033       0.01        0.0033       0.001
  δ∞ (10, N)     0.28        0.22        0.18         0.14
  δ∞ (200, N)    0.48        0.48        0.47         0.46
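The figures in this table can be reproduced by direct simulation; the sketch below (restricted to N = 100 and N = 1,000 and a modest number of replications, purely for speed) estimates δ∞(d, N) under the uniform assumption above.

```python
import numpy as np

rng = np.random.default_rng(1)

def delta_inf(d, N, n_rep=200):
    """Monte Carlo estimate of E[min_i ||eta(y_obs) - eta(y_i)||_inf]
    when observed and simulated summaries are iid uniform on [0,1]^d."""
    vals = []
    for _ in range(n_rep):
        obs = rng.random(d)
        sims = rng.random((N, d))
        vals.append(np.abs(sims - obs).max(axis=1).min())
    return np.mean(vals)

for d in (1, 2, 10, 200):
    print(d, [round(delta_inf(d, N), 5) for N in (100, 1000)])
```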
  • 41. ABC (simul’) advances how approximative is ABC? ABC as knn Simulating from the prior is often poor in efficiency Either modify the proposal distribution on θ to increase the density of x’s within the vicinity of y ... [Marjoram et al, 2003; Bortot et al., 2007; Sisson et al., 2007] ...or by viewing the problem as a conditional density estimation and by developing techniques to allow for larger ε [Beaumont et al., 2002] .....or even by including ε in the inferential framework [ABCµ ] [Ratmann et al., 2009]
  • 42. ABC (simul’) advances how approximative is ABC? ABC as knn Simulating from the prior is often poor in efficiency Either modify the proposal distribution on θ to increase the density of x’s within the vicinity of y ... [Marjoram et al, 2003; Bortot et al., 2007; Sisson et al., 2007] ...or by viewing the problem as a conditional density estimation and by developing techniques to allow for larger ε [Beaumont et al., 2002] .....or even by including ε in the inferential framework [ABCµ ] [Ratmann et al., 2009]
  • 43. ABC (simul’) advances how approximative is ABC? ABC as knn Simulating from the prior is often poor in efficiency Either modify the proposal distribution on θ to increase the density of x’s within the vicinity of y ... [Marjoram et al, 2003; Bortot et al., 2007; Sisson et al., 2007] ...or by viewing the problem as a conditional density estimation and by developing techniques to allow for larger ε [Beaumont et al., 2002] .....or even by including ε in the inferential framework [ABCµ ] [Ratmann et al., 2009]
  • 44. ABC (simul’) advances how approximative is ABC? ABC as knn Simulating from the prior is often poor in efficiency Either modify the proposal distribution on θ to increase the density of x’s within the vicinity of y ... [Marjoram et al, 2003; Bortot et al., 2007; Sisson et al., 2007] ...or by viewing the problem as a conditional density estimation and by developing techniques to allow for larger ε [Beaumont et al., 2002] .....or even by including ε in the inferential framework [ABCµ ] [Ratmann et al., 2009]
  • 45. ABC as knn [Biau et al., 2013, Annales de l’IHP] Practice of ABC: determine tolerance ε as a quantile of the observed distances, say the 10% or 1% quantile, ε = εN = qα (d1 , . . . , dN ) Interpretation of ε as nonparametric bandwidth only approximation of the actual practice [Blum & François, 2010] ABC is a k-nearest neighbour (knn) method with kN = N εN [Loftsgaarden & Quesenberry, 1965]
  • 46. ABC as knn [Biau et al., 2013, Annales de l’IHP] Practice of ABC: determine tolerance ε as a quantile of the observed distances, say the 10% or 1% quantile, ε = εN = qα (d1 , . . . , dN ) Interpretation of ε as nonparametric bandwidth only approximation of the actual practice [Blum & François, 2010] ABC is a k-nearest neighbour (knn) method with kN = N εN [Loftsgaarden & Quesenberry, 1965]
  • 47. ABC as knn [Biau et al., 2013, Annales de l’IHP] Practice of ABC: determine tolerance ε as a quantile of the observed distances, say the 10% or 1% quantile, ε = εN = qα (d1 , . . . , dN ) Interpretation of ε as nonparametric bandwidth only approximation of the actual practice [Blum & François, 2010] ABC is a k-nearest neighbour (knn) method with kN = N εN [Loftsgaarden & Quesenberry, 1965]
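A minimal sketch of this practice, on the same toy normal model as before: the tolerance εN is set as the α-quantile of the simulated distances, which amounts to retaining the kN ≈ αN nearest neighbours of η(y); the model and the value of α are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
y_obs = rng.normal(1.5, 1.0, size=50)
eta_obs = y_obs.mean()

N, alpha = 100_000, 0.01                        # keep roughly k_N = alpha * N neighbours
theta = rng.normal(0.0, np.sqrt(10.0), size=N)  # prior simulations of theta
z = rng.normal(theta[:, None], 1.0, size=(N, y_obs.size))
dist = np.abs(z.mean(axis=1) - eta_obs)         # d_i = rho{eta(z_i), eta(y)}

eps_N = np.quantile(dist, alpha)                # tolerance as an empirical quantile
knn_sample = theta[dist <= eps_N]               # equivalently, the k_N nearest thetas
print(eps_N, knn_sample.size, knn_sample.mean())
```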
  • 48. ABC consistency Provided kN / log log N −→ ∞ and kN /N −→ 0 as N → ∞, for almost all s0 (with respect to the distribution of S), with probability 1, (1/kN ) Σ_{j=1}^{kN} ϕ(θj ) −→ E[ϕ(θj )|S = s0 ] [Devroye, 1982] Biau et al. (2013) also recall pointwise and integrated mean square error consistency results on the corresponding kernel estimate of the conditional posterior distribution, under constraints kN → ∞, kN /N → 0, hN → 0 and hN^p kN → ∞
  • 49. ABC consistency Provided kN / log log N −→ ∞ and kN /N −→ 0 as N → ∞, for almost all s0 (with respect to the distribution of S), with probability 1, (1/kN ) Σ_{j=1}^{kN} ϕ(θj ) −→ E[ϕ(θj )|S = s0 ] [Devroye, 1982] Biau et al. (2013) also recall pointwise and integrated mean square error consistency results on the corresponding kernel estimate of the conditional posterior distribution, under constraints kN → ∞, kN /N → 0, hN → 0 and hN^p kN → ∞
  • 50. Rates of convergence Further assumptions (on target and kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like when m = 1, 2, 3, kN ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)} when m = 4, kN ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)} log N when m > 4, kN ≈ N^{(p+4)/(m+p+4)} and rate N^{−4/(m+p+4)} [Biau et al., 2013] Drag: Only applies to sufficient summary statistics
  • 51. Rates of convergence Further assumptions (on target and kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like when m = 1, 2, 3, kN ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)} when m = 4, kN ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)} log N when m > 4, kN ≈ N^{(p+4)/(m+p+4)} and rate N^{−4/(m+p+4)} [Biau et al., 2013] Drag: Only applies to sufficient summary statistics
  • 52. ABC inference machine Intractable likelihoods ABC methods ABC as an inference machine Error inc. Exact BC and approximate targets summary statistic ABC for model choice Model choice consistency Conclusion and perspectives
  • 53. How Bayesian is aBc..? may be a convergent method of inference (meaningful? sufficient? foreign?) approximation error unknown (w/o massive simulation) pragmatic/empirical B (there is no other solution!) many calibration issues (tolerance, distance, statistics) the NP side should be incorporated into the whole B picture the approximation error should also be part of the B inference
  • 54. ABCµ Idea Infer about the error as well as about the parameter: Use of a joint density f (θ, ε|y) ∝ ξ(ε|y, θ) × πθ (θ) × πε (ε) where y is the data, and ξ(ε|y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and y when z ∼ f (z|θ) Warning! Replacement of ξ(ε|y, θ) with a non-parametric kernel approximation. [Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]
  • 55. ABCµ Idea Infer about the error as well as about the parameter: Use of a joint density f (θ, ε|y) ∝ ξ(ε|y, θ) × πθ (θ) × πε (ε) where y is the data, and ξ(ε|y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and y when z ∼ f (z|θ) Warning! Replacement of ξ(ε|y, θ) with a non-parametric kernel approximation. [Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]
  • 56. ABCµ Idea Infer about the error as well as about the parameter: Use of a joint density f (θ, ε|y) ∝ ξ(ε|y, θ) × πθ (θ) × πε (ε) where y is the data, and ξ(ε|y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and y when z ∼ f (z|θ) Warning! Replacement of ξ(ε|y, θ) with a non-parametric kernel approximation. [Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]
  • 57. Wilkinson’s exact BC (not exactly!) ABC approximation error (i.e. non-zero tolerance) replaced with exact simulation from a controlled approximation to the target, convolution of true posterior with kernel function π_ε(θ, z|y) = π(θ)f (z|θ)K_ε(y − z) / ∫ π(θ)f (z|θ)K_ε(y − z) dz dθ , with K_ε kernel parameterised by bandwidth ε. [Wilkinson, 2013] Theorem The ABC algorithm based on the production of a randomised observation ỹ = y + ξ, ξ ∼ K_ε, and an acceptance probability of K_ε(y − z)/M gives draws from the posterior distribution π(θ|y).
  • 58. Wilkinson’s exact BC (not exactly!) ABC approximation error (i.e. non-zero tolerance) replaced with exact simulation from a controlled approximation to the target, convolution of true posterior with kernel function π_ε(θ, z|y) = π(θ)f (z|θ)K_ε(y − z) / ∫ π(θ)f (z|θ)K_ε(y − z) dz dθ , with K_ε kernel parameterised by bandwidth ε. [Wilkinson, 2013] Theorem The ABC algorithm based on the production of a randomised observation ỹ = y + ξ, ξ ∼ K_ε, and an acceptance probability of K_ε(y − z)/M gives draws from the posterior distribution π(θ|y).
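A sketch of the acceptance step in this result, with a Gaussian kernel K_ε applied on the summary scale and a toy normal model standing in for f (·|θ); the kernel, the bound M (taken as the kernel mode) and the model are illustrative choices for the sketch, not Wilkinson's specific setup.

```python
import numpy as np

rng = np.random.default_rng(3)
y_obs = rng.normal(1.5, 1.0, size=50)
eta_obs = y_obs.mean()
eps = 0.05                                   # kernel bandwidth

def K(u):
    """Unnormalised Gaussian kernel with bandwidth eps; only K(u)/M matters."""
    return np.exp(-0.5 * (u / eps) ** 2)

M = K(0.0)                                   # bound on the kernel (its mode)

samples = []
while len(samples) < 2000:
    theta = rng.normal(0.0, np.sqrt(10.0))   # theta ~ prior
    z = rng.normal(theta, 1.0, size=y_obs.size)
    if rng.random() < K(z.mean() - eta_obs) / M:   # accept with prob. K_eps(y - z)/M
        samples.append(theta)
print(np.mean(samples))
```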
  • 59. How exact a BC? Pros Pseudo-data from true model vs. observed data from noisy model Outcome is completely controlled Link with ABCµ when assuming y observed with measurement error with density K_ε Relates to the theory of model approximation [Kennedy & O’Hagan, 2001] leads to “noisy ABC”: just do it!, i.e. perturb the data down to precision ε and proceed [Fearnhead & Prangle, 2012] Cons Requires K_ε to be bounded by M True approximation error never assessed
  • 60. How exact a BC? Pros Pseudo-data from true model vs. observed data from noisy model Outcome is completely controlled Link with ABCµ when assuming y observed with measurement error with density K_ε Relates to the theory of model approximation [Kennedy & O’Hagan, 2001] leads to “noisy ABC”: just do it!, i.e. perturb the data down to precision ε and proceed [Fearnhead & Prangle, 2012] Cons Requires K_ε to be bounded by M True approximation error never assessed
  • 61. Which summary? Fundamental difficulty of the choice of the summary statistic when there is no non-trivial sufficient statistics Loss of statistical information balanced against gain in data roughening Approximation error and information loss remain unknown Choice of statistics induces choice of distance function towards standardisation may be imposed for external/practical reasons (e.g., DIYABC) may gather several non-B point estimates can learn about efficient combination
  • 62. Which summary? Fundamental difficulty of the choice of the summary statistic when there is no non-trivial sufficient statistics Loss of statistical information balanced against gain in data roughening Approximation error and information loss remain unknown Choice of statistics induces choice of distance function towards standardisation may be imposed for external/practical reasons (e.g., DIYABC) may gather several non-B point estimates can learn about efficient combination
  • 63. Which summary? Fundamental difficulty of the choice of the summary statistic when there is no non-trivial sufficient statistics Loss of statistical information balanced against gain in data roughening Approximation error and information loss remain unknown Choice of statistics induces choice of distance function towards standardisation may be imposed for external/practical reasons (e.g., DIYABC) may gather several non-B point estimates can learn about efficient combination
  • 64. Semi-automatic ABC Fearnhead and Prangle (2012) study ABC and the selection of the summary statistic in close proximity to Wilkinson’s proposal ABC considered as inferential method and calibrated as such randomised (or ‘noisy’) version of the summary statistics ˜ η(y) = η(y) + τ derivation of a well-calibrated version of ABC, i.e. an algorithm that gives proper predictions for the distribution associated with this randomised summary statistic [weak property]
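A rough sketch of the semi-automatic construction: a pilot run from the prior predictive is used to regress θ on a set of candidate summaries, and the fitted linear predictor, an estimate of E[θ|y], then serves as the low-dimensional summary in a second ABC pass. The model, the candidate summaries and the plain least-squares fit are all choices made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(4)
y_obs = rng.normal(1.5, 1.0, size=50)

def summaries(y):
    # candidate raw summaries; the linear combination below is learnt from the pilot run
    return np.array([y.mean(), np.median(y), y.var(), np.mean(np.abs(y - np.median(y)))])

# pilot run: simulate (theta, y) pairs from the prior predictive
n_pilot = 20_000
theta = rng.normal(0.0, np.sqrt(10.0), size=n_pilot)
S = np.array([summaries(rng.normal(t, 1.0, size=y_obs.size)) for t in theta])

# least-squares regression of theta on the summaries gives eta(y) ~ E[theta | y]
X = np.column_stack([np.ones(n_pilot), S])
beta = np.linalg.lstsq(X, theta, rcond=None)[0]

def eta(y):
    """Semi-automatic summary: fitted linear predictor evaluated at y."""
    return np.concatenate([[1.0], summaries(y)]) @ beta

print("semi-automatic summary of y_obs:", eta(y_obs))
```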
  • 65. Summary [of F&P/statistics) optimality of the posterior expectation E[θ|y] of the parameter of interest as summary statistics η(y)! [requires iterative process] use of the standard quadratic loss function (θ − θ0 )T A(θ − θ0 ) recent extension to model choice, optimality of Bayes factor B12 (y) [F&P, ISBA 2012, Kyoto]
  • 66. Summary [of F&P/statistics) optimality of the posterior expectation E[θ|y] of the parameter of interest as summary statistics η(y)! [requires iterative process] use of the standard quadratic loss function (θ − θ0 )T A(θ − θ0 ) recent extension to model choice, optimality of Bayes factor B12 (y) [F&P, ISBA 2012, Kyoto]
  • 67. LDA summaries for model choice In parallel to F& P semi-automatic ABC, selection of most discriminant subvector out of a collection of summary statistics, based on Linear Discriminant Analysis (LDA) [Estoup & al., 2012, Mol. Ecol. Res.] Solution now implemented in DIYABC.2 [Cornuet & al., 2008, Bioinf.; Estoup & al., 2013]
  • 68. Implementation Step 1: Take a subset of the α% (e.g., 1%) best simulations from an ABC reference table usually including 10^6–10^9 simulations for each of the M compared scenarios/models Selection based on normalized Euclidean distance computed between observed and simulated raw summaries Step 2: run LDA on this subset to transform summaries into (M − 1) discriminant variables When computing LDA functions, weight simulated data with an Epanechnikov kernel Step 3: Estimation of the posterior probabilities of each competing scenario/model by polychotomous local logistic regression against the M − 1 most discriminant variables [Cornuet & al., 2008, Bioinformatics]
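The three steps can be sketched with standard scikit-learn components as below; the toy reference table, the omission of the Epanechnikov weighting, and the plain multinomial logistic fit (instead of a local polychotomous regression) are simplifications made for this sketch.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

def lda_model_probabilities(models, summaries, s_obs, alpha=0.01):
    """Steps 1-3 in miniature: keep the alpha fraction of simulations closest to
    the observed summaries, project them on the LDA axes, then regress the model
    index on the discriminant variables (unweighted logistic fit)."""
    scale = summaries.std(axis=0)
    dist = np.linalg.norm((summaries - s_obs) / scale, axis=1)
    keep = dist <= np.quantile(dist, alpha)                      # Step 1
    lda = LinearDiscriminantAnalysis().fit(summaries[keep], models[keep])
    disc = lda.transform(summaries[keep])                        # Step 2: M-1 axes
    logit = LogisticRegression(max_iter=1000).fit(disc, models[keep])
    return logit.predict_proba(lda.transform(s_obs[None, :]))[0]  # Step 3

# toy reference table: two models differing in the location of the summaries
rng = np.random.default_rng(5)
n = 50_000
models = rng.integers(0, 2, size=n)
summaries = rng.normal(models[:, None] * 0.5, 1.0, size=(n, 4))
print(lda_model_probabilities(models, summaries, s_obs=np.full(4, 0.5)))
```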
  • 69. LDA advantages much faster computation of scenario probabilities via polychotomous regression a (much) lower number of explanatory variables improves the accuracy of the ABC approximation, reduces the tolerance and avoids extra costs in constructing the reference table allows for a large collection of initial summaries ability to evaluate Type I and Type II errors on more complex models LDA reduces correlation among explanatory variables When available, using both simulated and real data sets, posterior probabilities of scenarios computed from LDA-transformed and raw summaries are strongly correlated
  • 70. LDA advantages much faster computation of scenario probabilities via polychotomous regression a (much) lower number of explanatory variables improves the accuracy of the ABC approximation, reduces the tolerance and avoids extra costs in constructing the reference table allows for a large collection of initial summaries ability to evaluate Type I and Type II errors on more complex models LDA reduces correlation among explanatory variables When available, using both simulated and real data sets, posterior probabilities of scenarios computed from LDA-transformed and raw summaries are strongly correlated
  • 71. ABC for model choice Intractable likelihoods ABC methods ABC as an inference machine ABC for model choice BMC Principle Gibbs random fields (counterexample) Generic ABC model choice Model choice consistency Conclusion and perspectives
  • 72. Bayesian model choice BMC Principle Several models M1 , M2 , . . . are considered simultaneously for dataset y and model index M central to inference. Use of prior π(M = m), plus prior distribution on the parameter conditional on the value m of the model index, πm (θm )
  • 73. Bayesian model choice BMC Principle Several models M1 , M2 , . . . are considered simultaneously for dataset y and model index M central to inference. Goal is to derive the posterior distribution of M, π(M = m|data) a challenging computational target when models are complex.
  • 74. Generic ABC for model choice Algorithm 2 Likelihood-free model choice sampler (ABC-MC) for t = 1 to T do repeat Generate m from the prior π(M = m) Generate θm from the prior πm (θm ) Generate z from the model fm (z|θm ) until ρ{η(z), η(y)} < ε Set m(t) = m and θ(t) = θm end for [Grelaud & al., 2009; Toni & al., 2009]
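A minimal sketch of Algorithm 2 for the Poisson versus geometric comparison used later in the talk, with the sample sum as summary; the priors (Exp(1) on λ and U(0, 1) on p) and the tolerance are assumptions of the sketch, since the slides do not specify them.

```python
import numpy as np

rng = np.random.default_rng(6)
y_obs = rng.poisson(2.0, size=30)
eta_obs = y_obs.sum()            # S = sum(x), sufficient for each model separately

def abc_model_choice(T, eps):
    """ABC-MC: draw (m, theta_m, z) from the joint prior predictive and keep
    the model index whenever the summaries are within eps."""
    kept = []
    for _ in range(T):
        m = rng.integers(1, 3)                        # pi(M = m) = 1/2
        if m == 1:
            lam = rng.exponential(1.0)                # illustrative prior on lambda
            z = rng.poisson(lam, size=y_obs.size)
        else:
            p = rng.random()                          # illustrative prior on p
            z = rng.geometric(p, size=y_obs.size) - 1 # geometric on {0,1,2,...}
        if abs(z.sum() - eta_obs) <= eps:
            kept.append(m)
    return np.array(kept)

kept = abc_model_choice(T=100_000, eps=2)
print("pi(M=1|y) approx:", np.mean(kept == 1) if kept.size else float("nan"))
```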
  • 75. ABC estimates Posterior probability π(M = m|y) approximated by the frequency of acceptances from model m: (1/T) Σ_{t=1}^{T} I_{m(t) = m} .
  • 76. ABC estimates Posterior probability π(M = m|y) approximated by the frequency of acceptances from model m: (1/T) Σ_{t=1}^{T} I_{m(t) = m} . Extension to a weighted polychotomous logistic regression estimate of π(M = m|y), with non-parametric kernel weights [Cornuet et al., DIYABC, 2009]
  • 77. Potts model [Skip MRFs] Potts model Distribution with an energy function of the form θS(y) = θ Σ_{l∼i} δ_{yl = yi} where l∼i denotes a neighbourhood structure In most realistic settings, summation Zθ = Σ_{x∈X} exp{θS(x)} involves too many terms to be manageable and numerical approximations cannot always be trusted
  • 78. Potts model [Skip MRFs] Potts model Distribution with an energy function of the form θS(y) = θ Σ_{l∼i} δ_{yl = yi} where l∼i denotes a neighbourhood structure In most realistic settings, summation Zθ = Σ_{x∈X} exp{θS(x)} involves too many terms to be manageable and numerical approximations cannot always be trusted
  • 79. Neighbourhood relations Setup Choice to be made between M neighbourhood relations i ∼_m i' (0 ≤ m ≤ M − 1) with Sm (x) = Σ_{i ∼_m i'} I{xi = xi'} driven by the posterior probabilities of the models.
  • 80. Model index Computational target: P(M = m|x) ∝ π(M = m) ∫_{Θm} fm (x|θm ) πm (θm ) dθm If S(x) sufficient statistic for the joint parameters (M, θ0 , . . . , θM−1 ), P(M = m|x) = P(M = m|S(x)) .
  • 81. Model index Computational target: P(M = m|x) ∝ π(M = m) ∫_{Θm} fm (x|θm ) πm (θm ) dθm If S(x) sufficient statistic for the joint parameters (M, θ0 , . . . , θM−1 ), P(M = m|x) = P(M = m|S(x)) .
  • 82. Sufficient statistics in Gibbs random fields Each model m has its own sufficient statistic Sm (·) and S(·) = (S0 (·), . . . , SM−1 (·)) is also (model-)sufficient. Explanation: For Gibbs random fields, x|M = m ∼ fm (x|θm ) = fm^1 (x|S(x)) fm^2 (S(x)|θm ) = (1/n(S(x))) fm^2 (S(x)|θm ) where n(S(x)) = #{x̃ ∈ X : S(x̃) = S(x)} c S(x) is sufficient for the joint parameters
  • 83. Sufficient statistics in Gibbs random fields Each model m has its own sufficient statistic Sm (·) and S(·) = (S0 (·), . . . , SM−1 (·)) is also (model-)sufficient. Explanation: For Gibbs random fields, x|M = m ∼ fm (x|θm ) = fm^1 (x|S(x)) fm^2 (S(x)|θm ) = (1/n(S(x))) fm^2 (S(x)|θm ) where n(S(x)) = #{x̃ ∈ X : S(x̃) = S(x)} c S(x) is sufficient for the joint parameters
  • 84. Toy example iid Bernoulli model versus two-state first-order Markov chain, i.e. f0 (x|θ0 ) = exp(θ0 Σ_{i=1}^{n} I{xi =1}) / {1 + exp(θ0 )}^n , versus f1 (x|θ1 ) = (1/2) exp(θ1 Σ_{i=2}^{n} I{xi = xi−1}) / {1 + exp(θ1 )}^{n−1} , with priors θ0 ∼ U(−5, 5) and θ1 ∼ U(0, 6) (inspired by “phase transition” boundaries).
  • 85. About sufficiency If η1 (x) sufficient statistic for model m = 1 and parameter θ1 and η2 (x) sufficient statistic for model m = 2 and parameter θ2 , (η1 (x), η2 (x)) is not always sufficient for (m, θm ) c Potential loss of information at the testing level
  • 86. About sufficiency If η1 (x) sufficient statistic for model m = 1 and parameter θ1 and η2 (x) sufficient statistic for model m = 2 and parameter θ2 , (η1 (x), η2 (x)) is not always sufficient for (m, θm ) c Potential loss of information at the testing level
  • 87. Poisson/geometric example Sample x = (x1 , . . . , xn ) from either a Poisson P(λ) or from a geometric G(p) Sum S = Σ_{i=1}^{n} xi = η(x) sufficient statistic for either model but not simultaneously
  • 88. Limiting behaviour of B12 (T → ∞) ABC approximation B12^ε(y) = Σ_{t=1}^{T} I_{mt =1} I_{ρ{η(zt ),η(y)}≤ε} / Σ_{t=1}^{T} I_{mt =2} I_{ρ{η(zt ),η(y)}≤ε} , where the (mt , z t )’s are simulated from the (joint) prior As T goes to infinity, limit B12^ε(y) = ∫ I_{ρ{η(z),η(y)}≤ε} π1 (θ1 )f1 (z|θ1 ) dz dθ1 / ∫ I_{ρ{η(z),η(y)}≤ε} π2 (θ2 )f2 (z|θ2 ) dz dθ2 = ∫ I_{ρ{η,η(y)}≤ε} π1 (θ1 )f1^η (η|θ1 ) dη dθ1 / ∫ I_{ρ{η,η(y)}≤ε} π2 (θ2 )f2^η (η|θ2 ) dη dθ2 , where f1^η (η|θ1 ) and f2^η (η|θ2 ) distributions of η(z)
  • 89. Limiting behaviour of B12 (T → ∞) ABC approximation B12^ε(y) = Σ_{t=1}^{T} I_{mt =1} I_{ρ{η(zt ),η(y)}≤ε} / Σ_{t=1}^{T} I_{mt =2} I_{ρ{η(zt ),η(y)}≤ε} , where the (mt , z t )’s are simulated from the (joint) prior As T goes to infinity, limit B12^ε(y) = ∫ I_{ρ{η(z),η(y)}≤ε} π1 (θ1 )f1 (z|θ1 ) dz dθ1 / ∫ I_{ρ{η(z),η(y)}≤ε} π2 (θ2 )f2 (z|θ2 ) dz dθ2 = ∫ I_{ρ{η,η(y)}≤ε} π1 (θ1 )f1^η (η|θ1 ) dη dθ1 / ∫ I_{ρ{η,η(y)}≤ε} π2 (θ2 )f2^η (η|θ2 ) dη dθ2 , where f1^η (η|θ1 ) and f2^η (η|θ2 ) distributions of η(z)
  • 90. Limiting behaviour of B12 (ε → 0) When ε goes to zero, B12^η (y) = ∫ π1 (θ1 )f1^η (η(y)|θ1 ) dθ1 / ∫ π2 (θ2 )f2^η (η(y)|θ2 ) dθ2 c Bayes factor based on the sole observation of η(y)
  • 91. Limiting behaviour of B12 (ε → 0) When ε goes to zero, B12^η (y) = ∫ π1 (θ1 )f1^η (η(y)|θ1 ) dθ1 / ∫ π2 (θ2 )f2^η (η(y)|θ2 ) dθ2 c Bayes factor based on the sole observation of η(y)
  • 92. Limiting behaviour of B12 (under sufficiency) If η(y) sufficient statistic in both models, fi (y|θi ) = gi (y) fi^η (η(y)|θi ) Thus B12 (y) = ∫_{Θ1} π1 (θ1 )g1 (y)f1^η (η(y)|θ1 ) dθ1 / ∫_{Θ2} π2 (θ2 )g2 (y)f2^η (η(y)|θ2 ) dθ2 = [g1 (y)/g2 (y)] × ∫ π1 (θ1 )f1^η (η(y)|θ1 ) dθ1 / ∫ π2 (θ2 )f2^η (η(y)|θ2 ) dθ2 = [g1 (y)/g2 (y)] B12^η (y) . [Didelot, Everitt, Johansen & Lawson, 2011] c No discrepancy only when cross-model sufficiency
  • 93. Limiting behaviour of B12 (under sufficiency) If η(y) sufficient statistic in both models, fi (y|θi ) = gi (y) fi^η (η(y)|θi ) Thus B12 (y) = ∫_{Θ1} π1 (θ1 )g1 (y)f1^η (η(y)|θ1 ) dθ1 / ∫_{Θ2} π2 (θ2 )g2 (y)f2^η (η(y)|θ2 ) dθ2 = [g1 (y)/g2 (y)] × ∫ π1 (θ1 )f1^η (η(y)|θ1 ) dθ1 / ∫ π2 (θ2 )f2^η (η(y)|θ2 ) dθ2 = [g1 (y)/g2 (y)] B12^η (y) . [Didelot, Everitt, Johansen & Lawson, 2011] c No discrepancy only when cross-model sufficiency
  • 94. Poisson/geometric example (back) Sample x = (x1 , . . . , xn ) from either a Poisson P(λ) or from a geometric G(p) Discrepancy ratio g1 (x)/g2 (x) = [S! n^{−S} / Π_i xi !] / [1 / C(n+S−1, S)]
  • 95. Poisson/geometric discrepancy Range of B12^η (x) versus B12 (x): The values produced have nothing in common.
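This discrepancy can be checked numerically: with priors λ ∼ Exp(1) and p ∼ U(0, 1) (an assumption made for this sketch, as the slides do not fix the priors), both the full-data Bayes factor and the summary-based one have closed forms.

```python
import numpy as np
from math import lgamma

def log_B12_full(x):
    """log Bayes factor (Poisson vs geometric) based on the full data x,
    under lambda ~ Exp(1) and p ~ U(0,1) (illustrative priors)."""
    n, S = len(x), int(sum(x))
    log_m1 = lgamma(S + 1) - sum(lgamma(xi + 1) for xi in x) - (S + 1) * np.log(n + 1)
    log_m2 = lgamma(n + 1) + lgamma(S + 1) - lgamma(n + S + 2)
    return log_m1 - log_m2

def log_B12_eta(x):
    """log Bayes factor based on the summary eta(x) = S alone, same priors."""
    n, S = len(x), int(sum(x))
    log_m1 = S * np.log(n) - (S + 1) * np.log(n + 1)
    log_m2 = (lgamma(n + S) - lgamma(S + 1) - lgamma(n)     # log C(n+S-1, S)
              + lgamma(n + 1) + lgamma(S + 1) - lgamma(n + S + 2))
    return log_m1 - log_m2

rng = np.random.default_rng(7)
x = rng.poisson(2.0, size=30)
print(log_B12_full(x), log_B12_eta(x))   # typically far apart
```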
  • 96. Formal recovery Creating an encompassing exponential family f (x|θ1 , θ2 , α1 , α2 ) ∝ exp{θ1^T η1 (x) + θ2^T η2 (x) + α1 t1 (x) + α2 t2 (x)} leads to a sufficient statistic (η1 (x), η2 (x), t1 (x), t2 (x)) [Didelot, Everitt, Johansen & Lawson, 2011]
  • 97. Formal recovery Creating an encompassing exponential family f (x|θ1 , θ2 , α1 , α2 ) ∝ exp{θ1^T η1 (x) + θ2^T η2 (x) + α1 t1 (x) + α2 t2 (x)} leads to a sufficient statistic (η1 (x), η2 (x), t1 (x), t2 (x)) [Didelot, Everitt, Johansen & Lawson, 2011] In the Poisson/geometric case, if Π_i xi ! is added to S, no discrepancy
  • 98. Formal recovery Creating an encompassing exponential family f (x|θ1 , θ2 , α1 , α2 ) ∝ exp{θ1^T η1 (x) + θ2^T η2 (x) + α1 t1 (x) + α2 t2 (x)} leads to a sufficient statistic (η1 (x), η2 (x), t1 (x), t2 (x)) [Didelot, Everitt, Johansen & Lawson, 2011] Only applies in genuine sufficiency settings... c Inability to evaluate information loss due to summary statistics
  • 99. Meaning of the ABC-Bayes factor The ABC approximation to the Bayes Factor is based solely on the summary statistics.... In the Poisson/geometric case, if E[yi ] = θ0 > 0, lim_{n→∞} B12^η (y) = (θ0 + 1)^2 e^{−θ0} / θ0
  • 100. Meaning of the ABC-Bayes factor The ABC approximation to the Bayes Factor is based solely on the summary statistics.... In the Poisson/geometric case, if E[yi ] = θ0 > 0, lim_{n→∞} B12^η (y) = (θ0 + 1)^2 e^{−θ0} / θ0
  • 101. ABC model choice consistency Intractable likelihoods ABC methods ABC as an inference machine ABC for model choice Model choice consistency Formalised framework Consistency results Summary statistics Conclusion and perspectives
  • 102. The starting point Central question to the validation of ABC for model choice: When is a Bayes factor based on an insufficient statistic T(y) consistent? Note/warning: c drawn on T(y) through B12^T (y) necessarily differs from c drawn on y through B12 (y) [Marin, Pillai, X, & Rousseau, JRSS B, 2013]
  • 103. The starting point Central question to the validation of ABC for model choice: When is a Bayes factor based on an insufficient statistic T(y) consistent? Note/warning: c drawn on T(y) through B12^T (y) necessarily differs from c drawn on y through B12 (y) [Marin, Pillai, X, & Rousseau, JRSS B, 2013]
  • 104. A benchmark, if toy, example Comparison suggested by referee of PNAS paper [thanks!]: [X, Cornuet, Marin, & Pillai, Aug. 2011] Model M1 : y ∼ N(θ1 , 1) opposed to model M2 : y ∼ L(θ2 , 1/√2), Laplace distribution with mean θ2 and scale parameter 1/√2 (variance one).
  • 105. A benchmark, if toy, example Comparison suggested by referee of PNAS paper [thanks!]: [X, Cornuet, Marin, & Pillai, Aug. 2011] Model M1 : y ∼ N(θ1 , 1) opposed to model M2 : y ∼ L(θ2 , 1/√2), Laplace distribution with mean θ2 and scale parameter 1/√2 (variance one). Four possible statistics 1. sample mean y (sufficient for M1 if not M2 ); 2. sample median med(y) (insufficient); 3. sample variance var(y) (ancillary); 4. median absolute deviation mad(y) = med(|y − med(y)|);
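The four candidate statistics are elementary to compute; a small helper (function name and toy usage chosen for this sketch):

```python
import numpy as np

def candidate_summaries(y):
    """The four statistics discussed above for the Gauss/Laplace benchmark."""
    med = np.median(y)
    return {
        "mean": y.mean(),                      # sufficient under the Gaussian model only
        "median": med,                         # insufficient
        "variance": y.var(ddof=1),             # ancillary (both models have variance one)
        "mad": np.median(np.abs(y - med)),     # median absolute deviation
    }

rng = np.random.default_rng(8)
print(candidate_summaries(rng.normal(0.0, 1.0, size=100)))
print(candidate_summaries(rng.laplace(0.0, 1 / np.sqrt(2), size=100)))
```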
  • 106. A benchmark, if toy, example Comparison suggested by referee of PNAS paper [thanks!]: [X, Cornuet, Marin, & Pillai, Aug. 2011] Model M1 : y ∼ N(θ1 , 1) opposed to model M2 : y ∼ L(θ2 , 1/√2), Laplace distribution with mean θ2 and scale parameter 1/√2 (variance one). [boxplots of ABC posterior probabilities of M1 under Gauss and Laplace data, n = 100]
  • 107. A benchmark, if toy, example Comparison suggested by referee of PNAS paper [thanks!]: [X, Cornuet, Marin, & Pillai, Aug. 2011] Model M1 : y ∼ N(θ1 , 1) opposed to model M2 : y ∼ L(θ2 , 1/√2), Laplace distribution with mean θ2 and scale parameter 1/√2 (variance one). [boxplots of ABC posterior probabilities of M1 under Gauss and Laplace data, n = 100, for the two sets of summary statistics]
  • 108. Framework Starting from sample y = (y1 , . . . , yn ) the observed sample, not necessarily iid with true distribution y ∼ Pn Summary statistics T(y) = Tn = (T1 (y), T2 (y), · · · , Td (y)) ∈ Rd with true distribution Tn ∼ Gn .
  • 109. Framework c Comparison of – under M1 , y ∼ F1,n (·|θ1 ) where θ1 ∈ Θ1 ⊂ Rp1 – under M2 , y ∼ F2,n (·|θ2 ) where θ2 ∈ Θ2 ⊂ Rp2 turned into – under M1 , T(y) ∼ G1,n (·|θ1 ), and θ1 |T(y) ∼ π1 (·|Tn ) – under M2 , T(y) ∼ G2,n (·|θ2 ), and θ2 |T(y) ∼ π2 (·|Tn )
  • 110. Assumptions A collection of asymptotic “standard” assumptions: [A1] is a standard central limit theorem under the true model [A2] controls the large deviations of the estimator Tn from the estimand µ(θ) [A3] is the standard prior mass condition found in Bayesian asymptotics (di effective dimension of the parameter) [A4] restricts the behaviour of the model density against the true density [Think CLT!]
  • 111. Assumptions A collection of asymptotic “standard” assumptions: [Think CLT!] [A1] There exist a sequence {vn } converging to +∞, a distribution Q, a symmetric, d × d positive definite matrix V0 and a vector µ0 ∈ Rd such that vn V0^{−1/2} (Tn − µ0 ) → Q as n → ∞, under Gn
  • 112. Assumptions A collection of asymptotic “standard” assumptions: [Think CLT!] [A2] For i = 1, 2, there exist sets Fn,i ⊂ Θi , functions µi (θi ) and constants i , τi , αi > 0 such that for all τ > 0, sup θi ∈Fn,i Gi,n |Tn − µi (θi )| > τ|µi (θi ) − µ0 | ∧ (|τµi (θi ) − µ0 | ∧ −αi i) with c −τ πi (Fn,i ) = o(vn i ). i |θi −α vn i
  • 113. Assumptions A collection of asymptotic “standard” assumptions: [Think CLT!] [A3] If inf{|µi (θi ) − µ0 |; θi ∈ Θi } = 0, defining (u > 0) Sn,i (u) = θi ∈ Fn,i ; |µi (θi ) − µ0 | −1 u vn , there exist constants di < τi ∧ αi − 1 such that −d πi (Sn,i (u)) ∼ u di vn i , ∀u vn
  • 114. Assumptions A collection of asymptotic “standard” assumptions: [Think CLT!] [A4] If inf{|µi (θi ) − µ0 |; θi ∈ Θi } = 0, for any U, δ > 0 and (En )n such that, if θi ∈ Sn,i (U) En ⊂ {t; gi (t|θi ) < δgn (t)} > 0, there exist c and Gn (En ) < .
  • 115. Assumptions A collection of asymptotic “standard” assumptions: [Think CLT!] Again (sumup) [A1] is a standard central limit theorem under the true model [A2] controls the large deviations of the estimator Tn from the estimand µ(θ) [A3] is the standard prior mass condition found in Bayesian asymptotics (di effective dimension of the parameter) [A4] restricts the behaviour of the model density against the true density
  • 116. Effective dimension Understanding di in [A3]: defined only when µ0 ∈ {µi (θi ), θi ∈ Θi }, πi (θi : |µi (θi ) − µ0 | < n−1/2 ) = O(n−di /2 ) is the effective dimension of the model Θi around µ0
  • 117. Effective dimension Understanding di in [A3]: when inf{|µi (θi ) − µ0 |; θi ∈ Θi } = 0, mi (Tn )/gn (Tn ) ∼ vn^{−di} , thus log(mi (Tn )/gn (Tn )) ∼ −di log vn and vn^{−di} is the penalization factor resulting from integrating θi out (see effective number of parameters in DIC)
  • 118. Effective dimension Understanding di in [A3]: In regular models, di dimension of T(Θi ), leading to BIC In non-regular models, di can be smaller
  • 119. Asymptotic marginals Asymptotically, under [A1]–[A4] mi (t) = ∫_{Θi} gi (t|θi ) πi (θi ) dθi is such that (i) if inf{|µi (θi ) − µ0 |; θi ∈ Θi } = 0, Cl vn^{d−di} ≤ mi (Tn ) ≤ Cu vn^{d−di} and (ii) if inf{|µi (θi ) − µ0 |; θi ∈ Θi } > 0, mi (Tn ) = oPn [vn^{d−τi} + vn^{d−αi} ].
  • 120. Within-model consistency Under same assumptions, if inf{|µi (θi ) − µ0 |; θi ∈ Θi } = 0, the posterior distribution of µi (θi ) given Tn is consistent at rate 1/vn provided αi ∧ τi > di .
  • 121. Between-model consistency Consequence of above is that asymptotic behaviour of the Bayes factor is driven by the asymptotic mean value of Tn under both models. And only by this mean value!
  • 122. Between-model consistency Consequence of above is that asymptotic behaviour of the Bayes factor is driven by the asymptotic mean value of Tn under both models. And only by this mean value! Indeed, if inf{|µ0 − µ2 (θ2 )|; θ2 ∈ Θ2 } = inf{|µ0 − µ1 (θ1 )|; θ1 ∈ Θ1 } = 0 then Cl vn^{−(d1 −d2 )} ≤ m1 (Tn )/m2 (Tn ) ≤ Cu vn^{−(d1 −d2 )} , where Cl , Cu = OPn (1), irrespective of the true model. c Only depends on the difference d1 − d2 : no consistency
  • 123. Between-model consistency Consequence of above is that asymptotic behaviour of the Bayes factor is driven by the asymptotic mean value of Tn under both models. And only by this mean value! Else, if inf{|µ0 − µ2 (θ2 )|; θ2 ∈ Θ2 } > inf{|µ0 − µ1 (θ1 )|; θ1 ∈ Θ1 } = 0 then m1 (Tn )/m2 (Tn ) ≥ Cu min( vn^{−(d1 −α2 )} , vn^{−(d1 −τ2 )} )
  • 124. Consistency theorem If inf{|µ0 − µ2 (θ2 )|; θ2 ∈ Θ2 } = inf{|µ0 − µ1 (θ1 )|; θ1 ∈ Θ1 } = 0, Bayes factor B12^T = O(vn^{−(d1 −d2 )} ) irrespective of the true model. It is inconsistent since it always picks the model with the smallest dimension
  • 125. Consistency theorem If Pn belongs to one of the two models and if µ0 cannot be attained by the other one : 0 = min (inf{|µ0 − µi (θi )|; θi ∈ Θi }, i = 1, 2) < max (inf{|µ0 − µi (θi )|; θi ∈ Θi }, i = 1, 2) , then the Bayes factor B12^T is consistent
  • 126. Consequences on summary statistics Bayes factor driven by the means µi (θi ) and the relative position of µ0 wrt both sets {µi (θi ); θi ∈ Θi }, i = 1, 2. For ABC, this implies the simplest summary statistics Tn are ancillary statistics with different mean values under both models Else, if Tn asymptotically depends on some of the parameters of the models, it is possible that there exists θi ∈ Θi such that µi (θi ) = µ0 even though model M1 is misspecified
  • 127. Consequences on summary statistics Bayes factor driven by the means µi (θi ) and the relative position of µ0 wrt both sets {µi (θi ); θi ∈ Θi }, i = 1, 2. For ABC, this implies the simplest summary statistics Tn are ancillary statistics with different mean values under both models Else, if Tn asymptotically depends on some of the parameters of the models, it is possible that there exists θi ∈ Θi such that µi (θi ) = µ0 even though model M1 is misspecified
  • 128. Toy example: Laplace versus Gauss [1] If Tn = n^{−1} Σ_{i=1}^{n} Xi^4 , µ1 (θ) = 3 + θ^4 + 6θ^2 , µ2 (θ) = 6 + · · · and the true distribution is Laplace with mean θ0 = 1, under the Gaussian model the value θ∗ = 2√3 − 3 also leads to µ0 = µ(θ∗ ) [here d1 = d2 = d = 1]
  • 129. Toy example: Laplace versus Gauss [1] If Tn = n^{−1} Σ_{i=1}^{n} Xi^4 , µ1 (θ) = 3 + θ^4 + 6θ^2 , µ2 (θ) = 6 + · · · and the true distribution is Laplace with mean θ0 = 1, under the Gaussian model the value θ∗ = 2√3 − 3 also leads to µ0 = µ(θ∗ ) [here d1 = d2 = d = 1] c A Bayes factor associated with such a statistic is inconsistent
  • 130. Toy example: Laplace versus Gauss [1] If Tn = n^{−1} Σ_{i=1}^{n} Xi^4 , µ1 (θ) = 3 + θ^4 + 6θ^2 , µ2 (θ) = 6 + · · ·
  • 131. Toy example: Laplace versus Gauss [1] If Tn = n^{−1} Σ_{i=1}^{n} Xi^4 , µ1 (θ) = 3 + θ^4 + 6θ^2 , µ2 (θ) = 6 + · · · [boxplots of the fourth-moment statistic under M1 and M2, n = 1000]
  • 132. Toy example: Laplace versus Gauss [0] When T(y) = (ȳn^(4) , ȳn^(6) ) and the true distribution is Laplace with mean θ0 = 0, then µ0 = 6, µ1 (θ1∗ ) = 6 with θ1∗ = 2√3 − 3 [d1 = 1 and d2 = 1/2] thus B12 ∼ n^{−1/4} → 0 : consistent Under the Gaussian model µ0 = 3, µ2 (θ2 ) ≥ 6 > 3 ∀θ2 , B12 → +∞ : consistent
  • 133. Toy example: Laplace versus Gauss [0] When T(y) = (ȳn^(4) , ȳn^(6) ) and the true distribution is Laplace with mean θ0 = 0, then µ0 = 6, µ1 (θ1∗ ) = 6 with θ1∗ = 2√3 − 3 [d1 = 1 and d2 = 1/2] thus B12 ∼ n^{−1/4} → 0 : consistent Under the Gaussian model µ0 = 3, µ2 (θ2 ) ≥ 6 > 3 ∀θ2 , B12 → +∞ : consistent
  • 134. Toy example: Laplace versus Gauss [0] When T(y) = (ȳn^(4) , ȳn^(6) ) [boxplots of ABC posterior probabilities based on the fourth and sixth moments, Gauss versus Laplace data, n = 100, 1,000 and 10,000]
  • 135. Checking for adequate statistics Run a practical check of the relevance (or non-relevance) of Tn null hypothesis that both models are compatible with the statistic Tn H0 : inf{|µ2 (θ2 ) − µ0 |; θ2 ∈ Θ2 } = 0 against H1 : inf{|µ2 (θ2 ) − µ0 |; θ2 ∈ Θ2 } > 0 testing procedure provides estimates of mean of Tn under each model and checks for equality
  • 136. Checking in practice Under each model Mi , generate ABC sample θi,l , l = 1, · · · , L For each θi,l , generate yi,l ∼ Fi,n (·|ψi,l ), derive Tn (yi,l ) and compute µ̂i = (1/L) Σ_{l=1}^{L} Tn (yi,l ), i = 1, 2 . Conditionally on Tn (y), √L (µ̂i − Eπ [µi (θi )|Tn (y)]) ⇝ N(0, Vi ), Test for a common mean H0 : µ̂1 ∼ N(µ0 , V1 ) , µ̂2 ∼ N(µ0 , V2 ) against the alternative of different means H1 : µ̂i ∼ N(µi , Vi ), with µ1 ≠ µ2 .
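A sketch of this check for two generic simulators; here the pseudo-posterior samples are replaced by prior draws, and the unweighted χ²-type statistic below is one simple way, chosen for the sketch, to compare the two estimated means of Tn.

```python
import numpy as np

rng = np.random.default_rng(9)

def check_common_mean(simulate_1, simulate_2, theta_1, theta_2, stat, L=500):
    """Compare the predictive means of T^n under both models: a large value of
    the (normalised) chi-square statistic signals that the two models cannot
    both be compatible with the chosen summary."""
    T1 = np.array([stat(simulate_1(t)) for t in rng.choice(theta_1, L)])
    T2 = np.array([stat(simulate_2(t)) for t in rng.choice(theta_2, L)])
    mu1, mu2 = T1.mean(), T2.mean()
    v1, v2 = T1.var(ddof=1) / L, T2.var(ddof=1) / L
    return (mu1 - mu2) ** 2 / (v1 + v2)          # approx chi2 with 1 df under H0

# toy usage on the Gauss/Laplace benchmark with the fourth moment as summary
n = 100
chi2 = check_common_mean(
    simulate_1=lambda t: rng.normal(t, 1.0, size=n),
    simulate_2=lambda t: rng.laplace(t, 1 / np.sqrt(2), size=n),
    theta_1=rng.normal(0, 1, size=5000),
    theta_2=rng.normal(0, 1, size=5000),
    stat=lambda y: np.mean(y ** 4),
)
print(chi2)
```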
  • 137. Toy example: Laplace versus Gauss [boxplots of the normalised χ2 compatibility statistic, without and with mad(y), under Gauss and Laplace data]
  • 138. A population genetic illustration Two populations (1 and 2) having diverged at a fixed known time in the past and third population (3) which diverged from one of those two populations (models 1 and 2, respectively). Observation of 50 diploid individuals/population genotyped at 5, 50 or 100 independent microsatellite loci. Model 2
  • 139. A population genetic illustration Two populations (1 and 2) having diverged at a fixed known time in the past and third population (3) which diverged from one of those two populations (models 1 and 2, respectively). Observation of 50 diploid individuals/population genotyped at 5, 50 or 100 independent microsatellite loci. Stepwise mutation model: the number of repeats of the mutated gene increases or decreases by one. Mutation rate µ common to all loci set to 0.005 (single parameter) with uniform prior distribution µ ∼ U[0.0001, 0.01]
  • 140. A population genetic illustration Summary statistics associated to the (δµ )^2 distance xl,i,j repeated number of allele in locus l = 1, . . . , L for individual i = 1, . . . , 100 within the population j = 1, 2, 3. Then (δµ )^2_{j1 ,j2} = (1/L) Σ_{l=1}^{L} ( (1/100) Σ_{i1 =1}^{100} xl,i1 ,j1 − (1/100) Σ_{i2 =1}^{100} xl,i2 ,j2 )^2 .
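A direct transcription of this distance, assuming (for the sketch) that the allele sizes are stored in an array x[l, i, j] indexed as in the formula:

```python
import numpy as np

def delta_mu_squared(x, j1, j2):
    """(delta_mu)^2 distance between populations j1 and j2, where
    x[l, i, j] is the allele size at locus l for individual i in population j."""
    diff = x[:, :, j1].mean(axis=1) - x[:, :, j2].mean(axis=1)  # per-locus mean difference
    return np.mean(diff ** 2)                                    # average over the L loci

# toy usage: L = 5 loci, 100 individuals, 3 populations
rng = np.random.default_rng(10)
x = rng.integers(8, 20, size=(5, 100, 3)).astype(float)
print(delta_mu_squared(x, 0, 1), delta_mu_squared(x, 0, 2), delta_mu_squared(x, 1, 2))
```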
  • 141. A population genetic illustration For two copies of locus l with allele sizes xl,i,j1 and xl,i ,j2 , most recent common ancestor at coalescence time τj1 ,j2 , gene genealogy distance of 2τj1 ,j2 , hence number of mutations Poisson with parameter 2µτj1 ,j2 . Therefore, xl,i,j1 − xl,i E 2 ,j2 |τj1 ,j2 = 2µτj1 ,j2 and Model 1 Model 2 (δµ )2 1,2 2µ1 t 2µ2 t E (δµ )2 1,3 2µ1 t 2µ2 t E (δµ )2 2,3 2µ1 t 2µ2 t E
  • 142. A population genetic illustration Thus, Bayes factor based only on distance (δµ )^2_{1,2} not convergent: if µ1 = µ2 , same expectation Bayes factor based only on distance (δµ )^2_{1,3} or (δµ )^2_{2,3} not convergent: if µ1 = 2µ2 or 2µ1 = µ2 same expectation if two of the three distances are used, Bayes factor converges: there is no (µ1 , µ2 ) for which all expectations are equal
  • 143. A population genetic illustration [boxplots of posterior probabilities that the data is from model 1, based on (δµ )^2_{1,2} alone, (δµ )^2_{1,3} alone, and (δµ )^2_{1,3} & (δµ )^2_{2,3}, for 5, 50 and 100 loci]
  • 144. A population genetic illustration [p-values of the compatibility test, based on (δµ )^2_{1,2} and on (δµ )^2_{1,3} & (δµ )^2_{2,3}]
  • 145. Embedded models When M1 submodel of M2 , and if the true distribution belongs to the smaller model M1 , Bayes factor is of order vn^{−(d1 −d2 )}
  • 146. Embedded models When M1 submodel of M2 , and if the true distribution belongs to the smaller model M1 , Bayes factor is of order vn^{−(d1 −d2 )} If summary statistic only informative on a parameter that is the same under both models, i.e. if d1 = d2 , then c the Bayes factor is not consistent
  • 147. Embedded models When M1 submodel of M2 , and if the true distribution belongs to the smaller model M1 , Bayes factor is of order vn^{−(d1 −d2 )} Else, d1 < d2 and Bayes factor is going to ∞ under M1 . If true distribution not in M1 , then c Bayes factor is consistent only if µ1 = µ2 = µ0
  • 148. Summary Model selection feasible with ABC Choice of summary statistics is paramount At best, ABC → π(· | T(y)) which concentrates around µ0 For consistent estimation : {θ; µ(θ) = µ0 } = {θ0 } For consistent testing {µ1 (θ1 ), θ1 ∈ Θ1 } ∩ {µ2 (θ2 ), θ2 ∈ Θ2 } = ∅
  • 149. Summary Model selection feasible with ABC Choice of summary statistics is paramount At best, ABC → π(· | T(y)) which concentrates around µ0 For consistent estimation : {θ; µ(θ) = µ0 } = {θ0 } For consistent testing {µ1 (θ1 ), θ1 ∈ Θ1 } ∩ {µ2 (θ2 ), θ2 ∈ Θ2 } = ∅
  • 150. Conclusion/perspectives abc part of a wider picture to handle complex/Big Data models, able to start from rudimentary machine learning summaries many formats of empirical [likelihood] Bayes methods available lack of comparative tools and of an assessment for information loss