Approximate Bayesian Computation (ABC):
    model choice and empirical likelihood

                  Christian P. Robert
ICERM, “Computational Challenges in Probability”, Nov. 28,
                          2012

              Université Paris-Dauphine, IuF, & CREST
            Joint works with J.-M. Cornuet, J.-M. Marin,
         K.L. Mengersen, N. Pillai, P. Pudlo and J. Rousseau


                bayesianstatistics@gmail.com
Advertisement



   MCMSki IV to be held in Chamonix Mt Blanc, France, from
   Monday, Jan. 6 to Wed., Jan. 8, 2014
   All aspects of MCMC++ theory and methodology
   Parallel (invited and contributed) sessions: call for proposals on
   website http://www.pages.drexel.edu/~mwl25/mcmski/
Outline



Introduction

ABC

ABC as an inference machine

ABC for model choice

Model choice consistency

ABCel
Intractable likelihood



   Case of a well-defined statistical model where the likelihood
   function
                         $$\ell(\theta \mid y) = f(y_1, \ldots, y_n \mid \theta)$$


       is (really!) not available in closed form
       can (easily!) be neither completed nor demarginalised
       cannot be estimated by an unbiased estimator
    c Prohibits direct implementation of a generic MCMC algorithm
   like Metropolis–Hastings
Different perspectives on abc



   What is the (most) fundamental issue?
       a mere computational issue (that will eventually end up being
       solved by more powerful computers, etc., even if too costly in
       the short term)
       an inferential issue (opening opportunities for a new inference
       machine, with a different legitimacy than the classical B approach)
       a Bayesian conundrum (while inferential methods are available,
       how closely are they related to the B approach?)
Econom’ections


   Similar exploration of simulation-based and approximation
   techniques in Econometrics
       Simulated method of moments
       Method of simulated moments
       Simulated pseudo-maximum-likelihood
       Indirect inference
                                        [Gouriéroux & Monfort, 1996]

   even though motivation is partly-defined models rather than
   complex likelihoods
Indirect inference




   Minimise [in θ] a distance between estimators $\hat\beta$ based on a
   pseudo-model for genuine observations and for observations
   simulated under the true model and the parameter θ.

                             [Gouriéroux, Monfort & Renault, 1993;
                              Smith, 1993; Gallant & Tauchen, 1996]
Indirect inference (PML vs. PSE)


   Example of the pseudo-maximum-likelihood (PML)

                $$\hat\beta(y) = \arg\max_\beta \sum_t \log f\big(y_t \mid \beta, y_{1:(t-1)}\big)$$

   leading to

                $$\arg\min_\theta \big\|\hat\beta(y^o) - \hat\beta\big(y_1(\theta), \ldots, y_S(\theta)\big)\big\|^2$$

   when
                    $$y_s(\theta) \sim f(y \mid \theta)\,, \qquad s = 1, \ldots, S$$
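
   For concreteness, a minimal numpy sketch of indirect inference with a
   pseudo-maximum-likelihood auxiliary estimator; the MA(1) true model, the
   AR(1) pseudo-model and the crude grid search are illustrative assumptions,
   not part of the original example.

```python
# Minimal sketch of indirect inference via a pseudo-model (illustrative).
# True model: MA(1) x_t = e_t + theta*e_{t-1}; auxiliary pseudo-model: AR(1),
# whose PML estimator beta_hat is the lag-1 least-squares coefficient.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ma1(theta, T):
    e = rng.standard_normal(T + 1)
    return e[1:] + theta * e[:-1]

def beta_hat(x):
    """PML estimator of the AR(1) pseudo-model."""
    return np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)

T, S = 500, 20
theta_true = 0.6
y_obs = simulate_ma1(theta_true, T)
b_obs = beta_hat(y_obs)

# Indirect inference: minimise the distance between auxiliary estimates
# computed on observed and on simulated data (grid search kept for clarity).
grid = np.linspace(-0.9, 0.9, 181)
crit = [(b_obs - np.mean([beta_hat(simulate_ma1(th, T)) for _ in range(S)])) ** 2
        for th in grid]
print("indirect-inference estimate of theta:", grid[int(np.argmin(crit))])
```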
Indirect inference (PML vs. PSE)


   Example of the pseudo-score-estimator (PSE)

                $$\hat\beta(y) = \arg\min_\beta \Big\| \sum_t \frac{\partial \log f}{\partial \beta}\big(y_t \mid \beta, y_{1:(t-1)}\big) \Big\|^2$$

   leading to

                   $$\arg\min_\theta \big\|\hat\beta(y^o) - \hat\beta\big(y_1(\theta), \ldots, y_S(\theta)\big)\big\|^2$$

   when
                       $$y_s(\theta) \sim f(y \mid \theta)\,, \qquad s = 1, \ldots, S$$
Consistent indirect inference



           “...in order to get a unique solution the dimension of
       the auxiliary parameter β must be larger than or equal to
       the dimension of the initial parameter θ. If the problem is
       just identified the different methods become easier...”

   Consistency depending on the criterion and on the asymptotic
   identifiability of θ
                                 [Gouriéroux & Monfort, 1996, p. 66]
Choice of pseudo-model




   Arbitrariness of pseudo-model
   Pick model such that
    1. $\hat\beta(\theta)$ not flat (i.e. sensitive to changes in θ)
    2. $\hat\beta(\theta)$ not dispersed (i.e. robust against changes in $y_s(\theta)$)
                                          [Frigessi & Heggland, 2004]
Approximate Bayesian computation

 Introduction

 ABC
   Genesis of ABC
   ABC basics
   Advances and interpretations
   ABC as knn

 ABC as an inference machine

 ABC for model choice

 Model choice consistency

 ABCel
Genetic background of ABC



    skip genetics


   ABC is a recent computational technique that only requires being
   able to sample from the likelihood f (·|θ)
   This technique stemmed from population genetics models, about
   15 years ago, and population geneticists still contribute
   significantly to methodological developments of ABC.
                             [Griffith & al., 1997; Tavaré & al., 1999]
Demo-genetic inference



   Each model is characterized by a set of parameters θ that cover
   historical (divergence times, admixture times, ...), demographic
   (population sizes, admixture rates, migration rates, ...) and genetic
   (mutation rate, ...) factors
   The goal is to estimate these parameters from a dataset of
   polymorphism (DNA sample) y observed at the present time

   Problem:
   most of the time, we cannot calculate the likelihood of the
   polymorphism data f (y|θ)...
Neutral model at a given microsatellite locus, in a closed
panmictic population at equilibrium



                                     Mutations according to
                                     the Simple stepwise
                                     Mutation Model
                                     (SMM)
                                     • date of the mutations ∼
                                     Poisson process with
                                     intensity θ/2 over the
                                     branches
                                     • MRCA = 100
                                     • independent mutations:
                                     ±1 with pr. 1/2
     Sample of 8 genes
Neutral model at a given microsatellite locus, in a closed
panmictic population at equilibrium
                                     Kingman’s genealogy
                                     When time axis is
                                     normalized,
                                     T (k) ∼ Exp(k(k − 1)/2)

                                     Mutations according to
                                     the Simple stepwise
                                     Mutation Model
                                     (SMM)
                                     • date of the mutations ∼
                                     Poisson process with
                                     intensity θ/2 over the
                                     branches
                                     • MRCA = 100
                                     • independent mutations:
                                     ±1 with pr. 1/2
Neutral model at a given microsatellite locus, in a closed
panmictic population at equilibrium
                                     Kingman’s genealogy
                                     When time axis is
                                     normalized,
                                     T (k) ∼ Exp(k(k − 1)/2)

                                     Mutations according to
                                     the Simple stepwise
                                     Mutation Model
                                     (SMM)
                                     • date of the mutations ∼
                                     Poisson process with
                                     intensity θ/2 over the
                                     branches
                                     • MRCA = 100
                                     • independent mutations:
                                     ±1 with pr. 1/2

   Observations: leaves of the tree
                  $\hat\theta = ?$
Much more interesting models. . .

       several independent loci
       Independent gene genealogies and mutations
       different populations
       linked by an evolutionary scenario made of divergences,
       admixtures, migrations between populations, etc.
       larger sample size
       usually between 50 and 100 genes

   A typical evolutionary scenario: three populations POP 0, POP 1, POP 2,
   with divergence times τ1 and τ2 back to the MRCA
Intractable likelihood



   Missing (too missing!) data structure:

                     $$f(y \mid \theta) = \int_{\mathcal{G}} f(y \mid G, \theta)\, f(G \mid \theta)\, dG$$

   cannot be computed in a manageable way...
   The genealogies are considered as nuisance parameters
       This modelling clearly differs from the phylogenetic perspective
       where the tree is the parameter of interest.
a dubious ancestry...




                        You went to school to learn, girl (. . . )
                        Why 2 plus 2 makes four
                        Now, now, now, I’m gonna teach you (. . . )

                        All you gotta do is repeat after me!
                        A, B, C!
                        It’s easy as 1, 2, 3!
                        Or simple as Do, Re, Mi! (. . . )
A?B?C?




    A stands for approximate
    [wrong likelihood /
    picture]
    B stands for Bayesian
    C stands for computation
    [producing a parameter
    sample]
A?B?C?




    A stands for approximate
    [wrong likelihood /
    picture]
    B stands for Bayesian
    C stands for computation
    [producing a parameter
    sample]

    [Figure: grid of ABC posterior density estimates of θ, each panel labelled
    with its effective sample size (ESS ranging from roughly 76 to 156)]
How Bayesian is aBc?




  Could we turn the resolution into a Bayesian answer?
      ideally so (not meaningful: requires an ∞-ly powerful computer)
      asymptotically so (when sample size goes to ∞: meaningful?)
      approximation error unknown (w/o costly simulation)
      true Bayes for wrong model (formal and artificial)
      true Bayes for estimated likelihood (back to econometrics?)
Intractable likelihood



Back to stage zero: what can we do
when a likelihood function f (y|θ) is
well-defined but impossible / too
costly to compute...?
    MCMC cannot be implemented!
    shall we give up Bayesian
    inference altogether?!
    or settle for an almost Bayesian
    inference/picture...?
ABC methodology

  Bayesian setting: target is π(θ)f (x|θ)
  When likelihood f (x|θ) not in closed form, likelihood-free rejection
  technique:
  Foundation
  For an observation y ∼ f (y|θ), under the prior π(θ), if one keeps
  jointly simulating
                       $$\theta' \sim \pi(\theta)\,, \quad z \sim f(z \mid \theta')\,,$$
  until the auxiliary variable z is equal to the observed value, z = y,
  then the selected
                                $$\theta' \sim \pi(\theta \mid y)$$

          [Rubin, 1984; Diggle & Gratton, 1984; Tavaré et al., 1997]
A as A...pproximative



   When y is a continuous random variable, strict equality z = y is
   replaced with a tolerance zone

                              $$\rho(y, z) \leq \epsilon$$

   where ρ is a distance
   Output distributed from

               $$\pi(\theta)\, P_\theta\{\rho(y, z) < \epsilon\} \;\overset{\text{def}}{\propto}\; \pi(\theta \mid \rho(y, z) < \epsilon)$$

                                              [Pritchard et al., 1999]
ABC algorithm


  In most implementations, further degree of A...pproximation:

  Algorithm 1 Likelihood-free rejection sampler
    for i = 1 to N do
      repeat
         generate θ' from the prior distribution π(·)
         generate z from the likelihood f (·|θ')
      until ρ{η(z), η(y)} ≤ ε
      set θi = θ'
    end for

  where η(y) defines a (not necessarily sufficient) statistic
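
  A minimal sketch of Algorithm 1 on a toy Normal model; the prior, simulator,
  summary statistic and tolerance below are illustrative choices, not part of
  the original slides.

```python
# Minimal sketch of the likelihood-free rejection sampler (Algorithm 1)
# on a toy Normal-mean model; all tuning choices are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 50
y = rng.normal(2.0, 1.0, n)            # "observed" data
eta = lambda x: x.mean()               # summary statistic (sufficient here)
rho = lambda a, b: abs(a - b)          # distance between summaries
eps = 0.1                              # tolerance

def abc_rejection(N):
    sample = []
    while len(sample) < N:
        theta = rng.normal(0.0, 10.0)          # theta' ~ pi(.)
        z = rng.normal(theta, 1.0, n)          # z ~ f(.|theta')
        if rho(eta(z), eta(y)) <= eps:         # accept if within tolerance
            sample.append(theta)
    return np.array(sample)

post = abc_rejection(200)
print(post.mean(), post.std())
```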
Output


  The likelihood-free algorithm samples from the marginal in z of:

                   $$\pi_\epsilon(\theta, z \mid y) = \frac{\pi(\theta)\, f(z \mid \theta)\, \mathbb{I}_{A_{\epsilon,y}}(z)}{\int_{A_{\epsilon,y} \times \Theta} \pi(\theta)\, f(z \mid \theta)\, dz\, d\theta}\,,$$

  where $A_{\epsilon,y} = \{z \in \mathcal{D} \mid \rho(\eta(z), \eta(y)) < \epsilon\}$.
  The idea behind ABC is that the summary statistics coupled with a
  small tolerance should provide a good approximation of the
  posterior distribution:

                    $$\pi_\epsilon(\theta \mid y) = \int \pi_\epsilon(\theta, z \mid y)\, dz \approx \pi(\theta \mid y)\,.$$


                                                              ...does it?!
Output

  The likelihood-free algorithm samples from the marginal in z of:

                        $$\pi_\epsilon(\theta, z \mid y) = \frac{\pi(\theta)\, f(z \mid \theta)\, \mathbb{I}_{A_{\epsilon,y}}(z)}{\int_{A_{\epsilon,y} \times \Theta} \pi(\theta)\, f(z \mid \theta)\, dz\, d\theta}\,,$$

  where $A_{\epsilon,y} = \{z \in \mathcal{D} \mid \rho(\eta(z), \eta(y)) < \epsilon\}$.
  The idea behind ABC is that the summary statistics coupled with a
  small tolerance should provide a good approximation of the
  restricted posterior distribution:

                        $$\pi_\epsilon(\theta \mid y) = \int \pi_\epsilon(\theta, z \mid y)\, dz \approx \pi(\theta \mid \eta(y))\,.$$


                                                               Not so good..!
    skip convergence details!
Convergence of ABC


  What happens when ε → 0?
  For B ⊂ Θ, we have

$$\int_B \frac{\int_{A_{\epsilon,y}} f(z \mid \theta)\, dz}{\int_{A_{\epsilon,y} \times \Theta} \pi(\theta) f(z \mid \theta)\, dz\, d\theta}\, \pi(\theta)\, d\theta
  = \int_{A_{\epsilon,y}} \frac{\int_B f(z \mid \theta)\, \pi(\theta)\, d\theta}{\int_{A_{\epsilon,y} \times \Theta} \pi(\theta) f(z \mid \theta)\, dz\, d\theta}\, dz$$

$$= \int_{A_{\epsilon,y}} \frac{\int_B f(z \mid \theta)\, \pi(\theta)\, d\theta}{m(z)}\, \frac{m(z)}{\int_{A_{\epsilon,y} \times \Theta} \pi(\theta) f(z \mid \theta)\, dz\, d\theta}\, dz$$

$$= \int_{A_{\epsilon,y}} \pi(B \mid z)\, \frac{m(z)}{\int_{A_{\epsilon,y} \times \Theta} \pi(\theta) f(z \mid \theta)\, dz\, d\theta}\, dz$$

  which indicates convergence for a continuous π(B|z).
Convergence (do not attempt!)


   ...and the above does not apply to insufficient statistics:
   If η(y) is not a sufficient statistic, the best one can hope for is

                        $$\pi(\theta \mid \eta(y))\,, \quad \text{not } \pi(\theta \mid y)$$

   If η(y) is an ancillary statistic, the whole information contained in
   y is lost! The “best” one can “hope” for is

                            $$\pi(\theta \mid \eta(y)) = \pi(\theta)$$

                                                               Bummer!!!
MA example


  Inference on the parameters of a MA(q) model

                        $$x_t = \epsilon_t + \sum_{i=1}^{q} \vartheta_i\, \epsilon_{t-i}\,, \qquad \epsilon_t \ \text{i.i.d. white noise}$$

    bypass MA illustration

  Simple prior: uniform over the inverse [real and complex] roots in

                                   $$\mathcal{Q}(u) = 1 - \sum_{i=1}^{q} \vartheta_i u^i$$

  under the identifiability conditions
MA example




  Inference on the parameters of a MA(q) model

                        $$x_t = \epsilon_t + \sum_{i=1}^{q} \vartheta_i\, \epsilon_{t-i}\,, \qquad \epsilon_t \ \text{i.i.d. white noise}$$

    bypass MA illustration

  Simple prior: uniform prior over the identifiability zone in the
  parameter space, i.e. triangle for MA(2)
MA example (2)

  ABC algorithm thus made of
    1. picking a new value (ϑ1 , ϑ2 ) in the triangle
    2. generating an iid sequence $(\epsilon_t)_{-q < t \leq T}$
    3. producing a simulated series $(x'_t)_{1 \leq t \leq T}$
  Distance: basic distance between the series

               $$\rho\big((x_t)_{1 \leq t \leq T}, (x'_t)_{1 \leq t \leq T}\big) = \sum_{t=1}^{T} (x_t - x'_t)^2$$

  or distance between summary statistics like the q = 2
  autocorrelations
                               $$\tau_j = \sum_{t=j+1}^{T} x_t x_{t-j}$$
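
  A sketch of steps 1–3 and of the two distances for the MA(2) case; the
  prior-sampling routine over the usual identifiability triangle, the sample
  sizes and the seed are illustrative assumptions.

```python
# Illustrative sketch of the ABC steps for the MA(2) example: draw (theta1, theta2)
# uniformly over the identifiability triangle, simulate a series, and compare either
# the raw series or the autocovariance-type summaries tau_j.
import numpy as np

rng = np.random.default_rng(2)
T, q = 200, 2

def draw_prior():
    # standard MA(2) identifiability triangle: -2 < t1 < 2, t1 + t2 > -1, t1 - t2 < 1
    while True:
        t1, t2 = rng.uniform(-2, 2), rng.uniform(-1, 1)
        if t1 + t2 > -1 and t1 - t2 < 1:
            return t1, t2

def simulate(t1, t2):
    e = rng.standard_normal(T + q)                  # (eps_t), -q < t <= T
    return e[q:] + t1 * e[q-1:-1] + t2 * e[:-q]     # x_t = eps_t + t1*eps_{t-1} + t2*eps_{t-2}

def tau(x, j):
    return np.sum(x[j:] * x[:-j])                   # summary statistic tau_j

x_obs = simulate(0.6, 0.2)
t1, t2 = draw_prior()
x_sim = simulate(t1, t2)
d_raw = np.sum((x_obs - x_sim) ** 2)                # distance between the series
d_sum = (tau(x_obs, 1) - tau(x_sim, 1)) ** 2 + (tau(x_obs, 2) - tau(x_sim, 2)) ** 2
print(d_raw, d_sum)
```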
Comparison of distance impact




   Impact of tolerance on ABC sample against either distance
   (ε = 100%, 10%, 1%, 0.1%) for an MA(2) model
Comparison of distance impact

   [Figure: ABC samples of θ1 and θ2 for the MA(2) model at tolerances
   ε = 100%, 10%, 1%, 0.1%, under each of the two distances]

   Impact of tolerance on ABC sample against either distance
   (ε = 100%, 10%, 1%, 0.1%) for an MA(2) model
Comments




     Role of distance paramount (because ε ≠ 0)
     Scaling of components of η(y) is also determinant
     ε matters little if ε “small enough”
     representative of “curse of dimensionality”
     small is beautiful!
     the data as a whole may be paradoxically weakly informative
     for ABC
ABC (simul’) advances


                         how approximative is ABC?                ABC as knn

   Simulating from the prior is often poor in efficiency
   Either modify the proposal distribution on θ to increase the density
   of x’s within the vicinity of y ...
        [Marjoram et al, 2003; Bortot et al., 2007, Sisson et al., 2007]

   ...or by viewing the problem as a conditional density estimation
   and by developing techniques to allow for larger ε
                                               [Beaumont et al., 2002]

   .....or even by including ε in the inferential framework [ABCµ]
                                                     [Ratmann et al., 2009]
ABC-NP


   Better usage of [prior] simulations by
   adjustment: instead of throwing away
   θ such that ρ(η(z), η(y)) > ε, replace
   θ's with locally regressed transforms

       $$\theta^* = \theta - \{\eta(z) - \eta(y)\}^{\mathsf{T}} \hat\beta$$
                                             [Csilléry et al., TEE, 2010]

   where $\hat\beta$ is obtained by [NP] weighted least square regression on
   (η(z) − η(y)) with weights

                           $$K_\delta\{\rho(\eta(z), \eta(y))\}$$

                                    [Beaumont et al., 2002, Genetics]
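
   A rough sketch of this weighted least-squares adjustment, assuming arrays of
   simulated parameters and summaries are already available; the Epanechnikov
   kernel, the toy simulator and all names are illustrative, not the reference
   implementation.

```python
# Sketch of the ABC regression adjustment theta* = theta - (eta(z) - eta(y))^T beta_hat,
# with beta_hat from kernel-weighted least squares (weights K_delta{rho(.,.)}).
import numpy as np

def abc_adjust(thetas, etas, eta_obs, delta):
    diff = etas - eta_obs                               # eta(z_i) - eta(y)
    dist = np.linalg.norm(diff, axis=1)                 # rho(eta(z_i), eta(y))
    w = np.maximum(1 - (dist / delta) ** 2, 0)          # Epanechnikov kernel K_delta
    keep = w > 0
    X = np.column_stack([np.ones(keep.sum()), diff[keep]])
    W = np.diag(w[keep])
    # weighted least squares: coef = (X'WX)^{-1} X'W theta
    coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ thetas[keep])
    beta_hat = coef[1:]
    return thetas[keep] - diff[keep] @ beta_hat, w[keep]

# toy usage with a made-up simulator
rng = np.random.default_rng(3)
thetas = rng.normal(0, 1, 5000)
etas = thetas[:, None] + rng.normal(0, 0.3, (5000, 2))
adj, w = abc_adjust(thetas, etas, eta_obs=np.array([0.5, 0.5]), delta=0.5)
print(np.average(adj, weights=w))
```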
ABC-NP (regression)



   Also found in the subsequent literature, e.g. in Fearnhead-Prangle (2012):
   weight directly simulation by

                           $$K_\delta\{\rho(\eta(z(\theta)), \eta(y))\}$$

   or
                       $$\frac{1}{S} \sum_{s=1}^{S} K_\delta\{\rho(\eta(z_s(\theta)), \eta(y))\}$$

                                       [consistent estimate of f (η|θ)]
   Curse of dimensionality: poor estimate when d = dim(η) is large...
ABC-NP (density estimation)



   Use of the kernel weights

                             $$K_\delta\{\rho(\eta(z(\theta)), \eta(y))\}$$

   leads to the NP estimate of the posterior expectation

                         $$\frac{\sum_i \theta_i\, K_\delta\{\rho(\eta(z(\theta_i)), \eta(y))\}}{\sum_i K_\delta\{\rho(\eta(z(\theta_i)), \eta(y))\}}$$

                                                          [Blum, JASA, 2010]
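
   In code, this estimate is just a kernel-weighted average of the simulated
   θi; a short sketch, reusing the same illustrative names and kernel as above.

```python
# Nadaraya-Watson-type estimate of E[theta | y] from simulated pairs (theta_i, eta(z_i)).
import numpy as np

def posterior_mean_nw(thetas, etas, eta_obs, delta):
    dist = np.linalg.norm(etas - eta_obs, axis=1)
    w = np.maximum(1 - (dist / delta) ** 2, 0)    # kernel weights K_delta{rho(.,.)}
    return np.sum(w * thetas) / np.sum(w)
```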
ABC-NP (density estimation)



   Use of the kernel weights

                            $$K_\delta\{\rho(\eta(z(\theta)), \eta(y))\}$$

   leads to the NP estimate of the posterior conditional density

                    $$\frac{\sum_i \tilde K_b(\theta_i - \theta)\, K_\delta\{\rho(\eta(z(\theta_i)), \eta(y))\}}{\sum_i K_\delta\{\rho(\eta(z(\theta_i)), \eta(y))\}}$$

                                                     [Blum, JASA, 2010]
ABC-NP (density estimations)



   Other versions incorporating regression adjustments

                         $$\frac{\sum_i \tilde K_b(\theta^*_i - \theta)\, K_\delta\{\rho(\eta(z(\theta_i)), \eta(y))\}}{\sum_i K_\delta\{\rho(\eta(z(\theta_i)), \eta(y))\}}$$

   In all cases, error

      $$\mathbb{E}[\hat g(\theta \mid y)] - g(\theta \mid y) = cb^2 + c\delta^2 + O_P(b^2 + \delta^2) + O_P(1/n\delta^d)$$

      $$\operatorname{var}(\hat g(\theta \mid y)) = \frac{c}{nb\delta^d}\,(1 + o_P(1))$$

                                 [Blum, JASA, 2010; standard NP calculations]
ABC-NCH


  Incorporating non-linearities and heteroscedasticities:

                $$\theta^* = \hat m(\eta(y)) + [\theta - \hat m(\eta(z))]\, \frac{\hat\sigma(\eta(y))}{\hat\sigma(\eta(z))}$$

  where
      $\hat m(\eta)$ estimated by non-linear regression (e.g., neural network)
      $\hat\sigma(\eta)$ estimated by non-linear regression on residuals

                     $$\log\{\theta_i - \hat m(\eta_i)\}^2 = \log \sigma^2(\eta_i) + \xi_i$$

                                              [Blum & François, 2009]
ABC as knn


                                    [Biau et al., 2012, arxiv:1207.6461]

  Practice of ABC: determine tolerance ε as a quantile on observed
  distances, say 10% or 1% quantile,

                        $$\epsilon = \epsilon_N = q_\alpha(d_1, \ldots, d_N)$$

      Interpretation of ε as nonparametric bandwidth only an
      approximation of the actual practice
                                            [Blum & François, 2010]

      ABC is a k-nearest neighbour (knn) method with $k_N = N\epsilon_N$
                                    [Loftsgaarden & Quesenberry, 1965]
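
  In code, picking ε as an empirical quantile of the simulated distances is
  exactly a k-nearest-neighbour selection; a short sketch with illustrative
  names.

```python
# Tolerance chosen as the alpha-quantile of the simulated distances d_1..d_N:
# keeping {i : d_i <= eps_N} amounts to keeping the ~alpha*N nearest neighbours of eta(y).
import numpy as np

def abc_knn_accept(dists, alpha=0.01):
    eps_N = np.quantile(dists, alpha)            # eps = eps_N = q_alpha(d_1, ..., d_N)
    accepted = np.where(dists <= eps_N)[0]       # indices of the retained simulations
    return eps_N, accepted
```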
ABC consistency
   Provided

                 $$k_N/\log\log N \longrightarrow \infty \quad \text{and} \quad k_N/N \longrightarrow 0$$

   as N → ∞, for almost all s0 (with respect to the distribution of
   S), with probability 1,

                      $$\frac{1}{k_N} \sum_{j=1}^{k_N} \varphi(\theta_j) \longrightarrow \mathbb{E}[\varphi(\theta_j) \mid S = s_0]$$


                                                              [Devroye, 1982]
   Biau et al. (2012) also recall pointwise and integrated mean square error
   consistency results on the corresponding kernel estimate of the
   conditional posterior distribution, under constraints

            $$k_N \to \infty\,, \quad k_N/N \to 0\,, \quad h_N \to 0 \quad \text{and} \quad h_N^p\, k_N \to \infty\,,$$
Rates of convergence


   Further assumptions (on target and kernel) allow for precise
   (integrated mean square) convergence rates (as a power of the
   sample size N), derived from classical k-nearest neighbour
   regression, like
       when m = 1, 2, 3, $k_N \approx N^{(p+4)/(p+8)}$ and rate $N^{-4/(p+8)}$
       when m = 4, $k_N \approx N^{(p+4)/(p+8)}$ and rate $N^{-4/(p+8)} \log N$
       when m > 4, $k_N \approx N^{(p+4)/(m+p+4)}$ and rate $N^{-4/(m+p+4)}$
                                  [Biau et al., 2012, arxiv:1207.6461]


   Only applies to sufficient summary statistics
ABC inference machine

 Introduction

 ABC

 ABC as an inference machine
   Error inc.
   Exact BC and approximate
   targets
   summary statistic

 ABC for model choice

 Model choice consistency

 ABCel
How much Bayesian aBc is..?




      maybe a convergent method of inference (meaningful?
      sufficient? foreign?)
      approximation error unknown (w/o simulation)
      pragmatic Bayes (there is no other solution!)
      many calibration issues (tolerance, distance, statistics)

                                          ...should Bayesians care?!
                                                    yes they should!!!
                                                              to ABCel
ABCµ



  Idea Infer about the error as well as about the parameter:
  Use of a joint density

                $$f(\theta, \epsilon \mid y) \propto \xi(\epsilon \mid y, \theta) \times \pi_\theta(\theta) \times \pi_\epsilon(\epsilon)$$

  where y is the data, and ξ(ε | y, θ) is the prior predictive density of
  ρ(η(z), η(y)) given θ and y when z ∼ f (z|θ)
  Warning! Replacement of ξ(ε | y, θ) with a non-parametric kernel
  approximation.
             [Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]
ABCµ details


   Multidimensional distances ρk (k = 1, . . . , K) and errors
   εk = ρk (ηk (z), ηk (y)), with

     εk ∼ ξk (ε|y, θ) ≈ ξ̂k (ε|y, θ) = (1/Bhk ) Σb K [{εk − ρk (ηk (zb ), ηk (y))}/hk ]

   then used in replacing ξ(ε|y, θ) with mink ξ̂k (ε|y, θ)
   ABCµ involves acceptance probability

     π(θ′, ε′) q(θ′, θ) q(ε′, ε) mink ξ̂k (ε′|y, θ′) / {π(θ, ε) q(θ, θ′) q(ε, ε′) mink ξ̂k (ε|y, θ)}
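As a rough illustration of this kernel step (a minimal sketch, not the authors' code; the Gaussian kernel and every name below are assumptions), a Python rendering of ξ̂k and of the min over k:

  import numpy as np

  def xi_hat(eps, dists, h):
      # kernel estimate of xi_k(eps|y,theta) from B simulated distances
      # rho_k(eta_k(z_b), eta_k(y)), i.e. (1/(B h)) sum_b K({eps - dist_b}/h)
      u = (eps - dists) / h
      return np.mean(np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)) / h

  def min_xi_hat(eps, dists_by_k, bandwidths):
      # min over the K summaries, as plugged into the ABCmu target and its MH ratio
      return min(xi_hat(eps[k], dists_by_k[k], bandwidths[k])
                 for k in range(len(dists_by_k)))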
ABCµ multiple errors




                       [ c Ratmann et al., PNAS, 2009]
ABCµ for model choice




                        [ c Ratmann et al., PNAS, 2009]
Wilkinson’s exact BC (not exactly!)

   ABC approximation error (i.e. non-zero tolerance) replaced with
   exact simulation from a controlled approximation to the target,
   convolution of true posterior with kernel function

        πε (θ, z|y) = π(θ) f (z|θ) Kε (y − z) / ∫ π(θ) f (z|θ) Kε (y − z) dz dθ ,

   with Kε kernel parameterised by bandwidth ε.
                                                     [Wilkinson, 2008]

   Theorem
   The ABC algorithm based on the assumption of a randomised
   observation ỹ = y + ξ, ξ ∼ Kε , and an acceptance probability of

                              Kε (y − z)/M

   gives draws from the posterior distribution π(θ|y).
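As an illustration of the theorem (a minimal sketch, not Wilkinson's code; the Gaussian choice of Kε and the helper names prior_sample and simulate_data are assumptions), a rejection sampler with acceptance probability Kε(y − z)/M:

  import numpy as np

  def K_eps(u, eps):
      # univariate Gaussian kernel with bandwidth eps
      return np.exp(-0.5 * (u / eps) ** 2) / (eps * np.sqrt(2 * np.pi))

  def exact_bc(y, prior_sample, simulate_data, eps, T, rng=np.random.default_rng()):
      M = K_eps(0.0, eps)                             # bound: K_eps(u) <= M for all u
      draws = []
      while len(draws) < T:
          theta = prior_sample(rng)                   # theta ~ prior
          z = simulate_data(theta, rng)               # z ~ f(.|theta)
          if rng.uniform() < K_eps(y - z, eps) / M:   # accept w.p. K_eps(y - z)/M
              draws.append(theta)
      return np.array(draws)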
How exact a BC?




      “Using ε to represent measurement error is
      straightforward, whereas using ε to model the model
      discrepancy is harder to conceptualize and not as
      commonly used”
                                       [Richard Wilkinson, 2008]
How exact a BC?

  Pros
         Pseudo-data from true model and observed data from noisy
         model
         Interesting perspective in that outcome is completely
         controlled
          Link with ABCµ and assuming y is observed with a
          measurement error with density Kε
         Relates to the theory of model approximation
                                               [Kennedy & O’Hagan, 2001]
  Cons
          Requires Kε to be bounded by M
         True approximation error never assessed
         Requires a modification of the standard ABC algorithm
ABC for HMMs



  Specific case of a hidden Markov model

                             Xt+1 ∼ Qθ (Xt , ·)
                             Yt+1 ∼ gθ (·|xt )

  where only y^0_{1:n} is observed.
                                   [Dean, Singh, Jasra, & Peters, 2011]
  Use of specific constraints, adapted to the Markov structure:

                y1 ∈ B(y^0_1 , ε) × · · · × yn ∈ B(y^0_n , ε)
ABC-MLE for HMMs


  ABC-MLE defined by

         θ̂n = arg maxθ Pθ ( Y1 ∈ B(y^0_1 , ε), . . . , Yn ∈ B(y^0_n , ε) )


  Exact MLE for the likelihood: same basis as Wilkinson!

                             p^ε_θ (y^0_1 , . . . , y^0_n )

  corresponding to the perturbed process

                 (xt , yt + εzt )_{1 ≤ t ≤ n}       zt ∼ U(B(0, 1))

                                    [Dean, Singh, Jasra, & Peters, 2011]
ABC-MLE is biased



      ABC-MLE is asymptotically (in n) biased with target

                      l^ε (θ) = Eθ∗ [log p^ε_θ (Y1 |Y−∞:0 )]


      but ABC-MLE converges to true value in the sense

                             l^ε_n (θn ) → l^ε (θ)

      for all sequences (θn ) converging to θ and εn
Noisy ABC-MLE



  Idea: Modify instead the data from the start

                      (y^0_1 + ζ1 , . . . , y^0_n + ζn )

                                                      [   see Fearnhead-Prangle   ]
  noisy ABC-MLE estimate

     arg maxθ Pθ ( Y1 ∈ B(y^0_1 + ζ1 , ε), . . . , Yn ∈ B(y^0_n + ζn , ε) )

                                [Dean, Singh, Jasra, & Peters, 2011]
Consistent noisy ABC-MLE




      Degrading the data improves the estimation performances:
          Noisy ABC-MLE is asymptotically (in n) consistent
          under further assumptions, the noisy ABC-MLE is
          asymptotically normal
           increase in variance of order ε^{−2}
      likely degradation in precision or computing time due to the
      lack of summary statistic [curse of dimensionality]
SMC for ABC likelihood

   Algorithm 2 SMC ABC for HMMs
      Given θ
      for k = 1, . . . , n do
        generate proposals (x^1_k , y^1_k ), . . . , (x^N_k , y^N_k ) from the model
        weigh each proposal with ω^l_k = I_{B(y^0_k + ζk , ε)} (y^l_k )
        renormalise the weights and sample the x^l_k ’s accordingly
      end for
      approximate the likelihood by

                        ∏_{k=1}^{n} ( Σ_{l=1}^{N} ω^l_k / N )



                                    [Jasra, Singh, Martin, & McCoy, 2010]
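A minimal Python sketch of this SMC approximation (simplified to scalar states and observations; transition, emission, the initial state x0 and all names are assumptions, not the authors' implementation):

  import numpy as np

  def smc_abc_loglik(theta, y_obs, zeta, eps, N, transition, emission, x0,
                     rng=np.random.default_rng()):
      x = np.full(N, x0, dtype=float)                              # N particles
      loglik = 0.0
      for k in range(len(y_obs)):
          x = np.array([transition(theta, xi, rng) for xi in x])   # propagate states
          y = np.array([emission(theta, xi, rng) for xi in x])     # propose y^l_k
          w = (np.abs(y - (y_obs[k] + zeta[k])) <= eps).astype(float)  # indicator weights
          if w.sum() == 0.0:
              return -np.inf                                       # estimate collapses to zero
          loglik += np.log(w.mean())                               # factor (1/N) sum_l w^l_k
          x = x[rng.choice(N, size=N, p=w / w.sum())]              # resample the x^l_k's
      return loglik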
Which summary?


  Fundamental difficulty of the choice of the summary statistic when
  there is no non-trivial sufficient statistic
  Starting from a large collection of available summary statistics,
  Joyce and Marjoram (2008) consider their sequential inclusion into
  the ABC target, with a stopping rule based on a likelihood ratio
  test
      Not taking into account the sequential nature of the tests
      Depends on parameterisation
      Order of inclusion matters
      likelihood ratio test?!
Which summary for model choice?



    Depending on the choice of η(·), the Bayes factor based on this
    insufficient statistic,

          B^η_12 (y) = ∫ π1 (θ1 ) f^η_1 (η(y)|θ1 ) dθ1 / ∫ π2 (θ2 ) f^η_2 (η(y)|θ2 ) dθ2 ,

    is consistent or not.
                                     [X, Cornuet, Marin, & Pillai, 2012]
    Consistency only depends on the range of Ei [η(y)] under both
    models.
                                 [Marin, Pillai, X, & Rousseau, 2012]
Semi-automatic ABC



  Fearnhead and Prangle (2010) study ABC and the selection of the
  summary statistic in close proximity to Wilkinson’s proposal
      ABC considered as inferential method and calibrated as such
      randomised (or ‘noisy’) version of the summary statistics

                             η̃(y) = η(y) + τ

      derivation of a well-calibrated version of ABC, i.e. an
      algorithm that gives proper predictions for the distribution
      associated with this randomised summary statistic
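A minimal sketch of the regression device behind this calibration (in the spirit of Fearnhead and Prangle's pilot run; prior_sample, simulate_data and the purely linear regressors are assumptions): simulate (θ, y) pairs, regress θ on the data, and use the fitted E[θ|y] as summary statistic.

  import numpy as np

  def build_summary(prior_sample, simulate_data, n_pilot, rng=np.random.default_rng()):
      thetas = np.array([prior_sample(rng) for _ in range(n_pilot)])
      X = np.array([simulate_data(t, rng) for t in thetas])      # raw data or transforms
      X1 = np.column_stack([np.ones(len(thetas)), X])            # add an intercept
      beta, *_ = np.linalg.lstsq(X1, thetas, rcond=None)         # least-squares fit
      # eta(y) approximates E[theta|y], the summary advocated below
      return lambda y: np.concatenate([[1.0], np.atleast_1d(y)]) @ beta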
Summary [of F&P/statistics]


      optimality of the posterior expectation

                                  E[θ|y]

      of the parameter of interest as summary statistic η(y)!
      use of the standard quadratic loss function

                           (θ − θ0 )^T A(θ − θ0 ) .

      recent extension to model choice, optimality of Bayes factor

                                  B12 (y)

                                                     [F&P, ISBA 2012 talk]
Conclusion



      Choice of summary statistics is paramount for ABC
      validation/performance
      At best, ABC approximates π(· | η(y))
      Model selection feasible with ABC [with caution!]
      For estimation, consistency if {θ; µ(θ) = µ0 } = {θ0 }
      For testing, consistency if
      {µ1 (θ1 ), θ1 ∈ Θ1 } ∩ {µ2 (θ2 ), θ2 ∈ Θ2 } = ∅
                                                        [Marin et al., 2011]
ABC for model choice


Introduction

ABC

ABC as an inference machine

ABC for model choice
  BMC Principle
  Gibbs random fields (counterexample)
  Generic ABC model choice

Model choice consistency

ABCel
Bayesian model choice



   BMC Principle
   Several models
                              M1 , M2 , . . .
   are considered simultaneously for dataset y and model index M
   central to inference.
   Use of
       prior π(M = m), plus
       prior distribution on the parameter conditional on the value m
       of the model index, πm (θm )
Bayesian model choice



   BMC Principle
   Several models
                             M1 , M2 , . . .
   are considered simultaneously for dataset y and model index M
   central to inference.
   Goal is to derive the posterior distribution of M,

                           π(M = m|data)

   a challenging computational target when models are complex.
Generic ABC for model choice



   Algorithm 3 Likelihood-free model choice sampler (ABC-MC)
     for t = 1 to T do
       repeat
          Generate m from the prior π(M = m)
          Generate θm from the prior πm (θm )
          Generate z from the model fm (z|θm )
        until ρ{η(z), η(y)} < ε
       Set m(t) = m and θ(t) = θm
     end for

                            [Grelaud & al., 2009; Toni & al., 2009]
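A minimal Python rendering of Algorithm 3 (a sketch only; model_prior, param_priors, simulators, eta and rho are hypothetical user-supplied callables):

  import numpy as np

  def abc_mc(y, eta, rho, model_prior, param_priors, simulators, eps, T,
             rng=np.random.default_rng()):
      eta_y = eta(y)
      sample = []
      for _ in range(T):
          while True:
              m = model_prior(rng)                  # generate m from pi(M = m)
              theta = param_priors[m](rng)          # generate theta_m from pi_m(theta_m)
              z = simulators[m](theta, rng)         # generate z from f_m(z|theta_m)
              if rho(eta(z), eta_y) < eps:          # keep when within tolerance
                  break
          sample.append((m, theta))
      return sample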
ABC estimates




  Posterior probability π(M = m|y) approximated by the frequency
  of acceptances from model m
                          (1/T ) Σ_{t=1}^{T} I_{m(t) = m} .
ABC estimates



  Posterior probability π(M = m|y) approximated by the frequency
  of acceptances from model m
                          (1/T ) Σ_{t=1}^{T} I_{m(t) = m} .

   Extension to a weighted polychotomous logistic regression
  estimate of π(M = m|y), with non-parametric kernel weights
                                    [Cornuet et al., DIYABC, 2009]
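A minimal sketch of both estimates (the raw acceptance frequency and a kernel-weighted polychotomous logistic regression; the Epanechnikov weights and the scikit-learn call are assumptions, not DIYABC's actual implementation):

  import numpy as np
  from sklearn.linear_model import LogisticRegression

  def prob_freq(models, m):
      # frequency of acceptances from model m among the T retained draws
      return np.mean(np.asarray(models) == m)

  def prob_logit(models, etas, dists, eps, eta_y):
      # weighted polychotomous logistic estimate of pi(M = m | y)
      w = np.maximum(1.0 - (np.asarray(dists) / eps) ** 2, 0.0)   # Epanechnikov weights
      clf = LogisticRegression(max_iter=1000)
      clf.fit(np.asarray(etas), np.asarray(models), sample_weight=w)
      return dict(zip(clf.classes_, clf.predict_proba([eta_y])[0]))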
Potts model

    Skip MRFs


   Potts model
   Distribution with an energy function of the form

                          θS(y) = θ Σ_{l∼i} δ_{yl = yi}

   where l∼i denotes a neighbourhood structure

   In most realistic settings, summation

                          Zθ = Σ_{x∈X} exp{θS(x)}

   involves too many terms to be manageable and numerical
   approximations cannot always be trusted
Neighbourhood relations



   Setup
   Choice to be made between M neighbourhood relations

                        i ∼m i′      (0 ≤ m ≤ M − 1)

   with
                     Sm (x) = Σ_{i ∼m i′} I{xi = xi′ }

   driven by the posterior probabilities of the models.
Model index



   Computational target:

          P(M = m|x) ∝ ∫Θm fm (x|θm ) πm (θm ) dθm π(M = m)

   If S(x) sufficient statistic for the joint parameters
   (M, θ0 , . . . , θM−1 ),

                    P(M = m|x) = P(M = m|S(x)) .
Sufficient statistics in Gibbs random fields



   Each model m has its own sufficient statistic Sm (·) and
   S(·) = (S0 (·), . . . , SM−1 (·)) is also (model-)sufficient.
   Explanation: For Gibbs random fields,

            x|M = m ∼ fm (x|θm ) = f^1_m (x|S(x)) f^2_m (S(x)|θm )
                                 = {1/n(S(x))} f^2_m (S(x)|θm )

   where
                    n(S(x)) = #{x̃ ∈ X : S(x̃) = S(x)}

             c S(x) is sufficient for the joint parameters
Toy example


  iid Bernoulli model versus two-state first-order Markov chain, i.e.
            f0 (x|θ0 ) = exp{θ0 Σ_{i=1}^{n} I{xi =1} } / {1 + exp(θ0 )}^n ,

  versus

        f1 (x|θ1 ) = (1/2) exp{θ1 Σ_{i=2}^{n} I{xi =xi−1 } } / {1 + exp(θ1 )}^{n−1} ,

  with priors θ0 ∼ U(−5, 5) and θ1 ∼ U(0, 6) (inspired by “phase
  transition” boundaries).
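For concreteness, the two model-wise sufficient statistics of this toy example can be computed as follows (a small sketch, with an illustrative numerical value):

  import numpy as np

  def toy_statistics(x):
      x = np.asarray(x)
      s0 = int(np.sum(x == 1))            # Bernoulli statistic, sum_i I{x_i = 1}
      s1 = int(np.sum(x[1:] == x[:-1]))   # Markov statistic, sum_{i>=2} I{x_i = x_{i-1}}
      return s0, s1

  # toy_statistics([1, 1, 0, 1, 0, 0]) returns (3, 2)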
About sufficiency




  If η1 (x) sufficient statistic for model m = 1 and parameter θ1 and
  η2 (x) sufficient statistic for model m = 2 and parameter θ2 ,
  (η1 (x), η2 (x)) is not always sufficient for (m, θm )
        c Potential loss of information at the testing level
Poisson/geometric example




   Sample
                            x = (x1 , . . . , xn )
   from either a Poisson P(λ) or from a geometric G(p)
   Sum
                           S = Σ_{i=1}^{n} xi = η(x)

   sufficient statistic for either model but not simultaneously
Limiting behaviour of B12 (T → ∞)

   ABC approximation

     B12 (y) = Σ_{t=1}^{T} I_{mt =1} I_{ρ{η(z t ),η(y)} ≤ ε} / Σ_{t=1}^{T} I_{mt =2} I_{ρ{η(z t ),η(y)} ≤ ε} ,

   where the (mt , z t )’s are simulated from the (joint) prior
   As T goes to infinity, limit

     B12 (y) = ∫ I_{ρ{η(z),η(y)} ≤ ε} π1 (θ1 ) f1 (z|θ1 ) dz dθ1 / ∫ I_{ρ{η(z),η(y)} ≤ ε} π2 (θ2 ) f2 (z|θ2 ) dz dθ2
             = ∫ I_{ρ{η,η(y)} ≤ ε} π1 (θ1 ) f^η_1 (η|θ1 ) dη dθ1 / ∫ I_{ρ{η,η(y)} ≤ ε} π2 (θ2 ) f^η_2 (η|θ2 ) dη dθ2 ,

   where f^η_1 (η|θ1 ) and f^η_2 (η|θ2 ) distributions of η(z)
Limiting behaviour of B12 (ε → 0)




   When ε goes to zero,

       B^η_12 (y) = ∫ π1 (θ1 ) f^η_1 (η(y)|θ1 ) dθ1 / ∫ π2 (θ2 ) f^η_2 (η(y)|θ2 ) dθ2

          c Bayes factor based on the sole observation of η(y)
Limiting behaviour of B12 (under sufficiency)


   If η(y) sufficient statistic in both models,

                        fi (y|θi ) = gi (y) f^η_i (η(y)|θi )

   Thus

     B12 (y) = ∫Θ1 π(θ1 ) g1 (y) f^η_1 (η(y)|θ1 ) dθ1 / ∫Θ2 π(θ2 ) g2 (y) f^η_2 (η(y)|θ2 ) dθ2
             = {g1 (y)/g2 (y)} × ∫ π1 (θ1 ) f^η_1 (η(y)|θ1 ) dθ1 / ∫ π2 (θ2 ) f^η_2 (η(y)|θ2 ) dθ2
             = {g1 (y)/g2 (y)} B^η_12 (y) .

                           [Didelot, Everitt, Johansen & Lawson, 2011]
          c No discrepancy only when cross-model sufficiency
Poisson/geometric example (back)



   Sample
                           x = (x1 , . . . , xn )
   from either a Poisson P(λ) or from a geometric G(p)
   Discrepancy ratio

          g1 (x)/g2 (x) = {S! n^{−S} / ∏i xi !} / {1 / C(n+S−1, S)}
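Evaluating this ratio numerically shows it varies across samples sharing the same S (a small sketch; the example values are illustrative):

  from math import comb, factorial, prod

  def discrepancy_ratio(x):
      n, S = len(x), sum(x)
      g1 = factorial(S) * n ** (-S) / prod(factorial(xi) for xi in x)   # Poisson part
      g2 = 1 / comb(n + S - 1, S)                                       # geometric part
      return g1 / g2

  # x = (2, 0, 1) and x = (1, 1, 1) share S = 3 yet give
  # discrepancy_ratio((2, 0, 1)) ≈ 1.11 and discrepancy_ratio((1, 1, 1)) ≈ 2.22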
Poisson/geometric discrepancy

    Range of B12 (x) versus B^η_12 (x): the values produced have nothing
    in common.
Formal recovery




   Creating an encompassing exponential family

    f (x|θ1 , θ2 , α1 , α2 ) ∝ exp{θ^T_1 η1 (x) + θ^T_2 η2 (x) + α1 t1 (x) + α2 t2 (x)}

   leads to a sufficient statistic (η1 (x), η2 (x), t1 (x), t2 (x))
                          [Didelot, Everitt, Johansen & Lawson, 2011]
Formal recovery



   Creating an encompassing exponential family

    f (x|θ1 , θ2 , α1 , α2 ) ∝ exp{θ^T_1 η1 (x) + θ^T_2 η2 (x) + α1 t1 (x) + α2 t2 (x)}

   leads to a sufficient statistic (η1 (x), η2 (x), t1 (x), t2 (x))
                          [Didelot, Everitt, Johansen & Lawson, 2011]

    In the Poisson/geometric case, if ∏i xi ! is added to S, no
    discrepancy
Formal recovery



   Creating an encompassing exponential family

    f (x|θ1 , θ2 , α1 , α2 ) ∝ exp{θ^T_1 η1 (x) + θ^T_2 η2 (x) + α1 t1 (x) + α2 t2 (x)}

   leads to a sufficient statistic (η1 (x), η2 (x), t1 (x), t2 (x))
                          [Didelot, Everitt, Johansen & Lawson, 2011]

   Only applies in genuine sufficiency settings...
    c Inability to evaluate information loss due to summary
   statistics
Meaning of the ABC-Bayes factor




   The ABC approximation to the Bayes Factor is based solely on the
   summary statistics....
   In the Poisson/geometric case, if E[yi ] = θ0 > 0,

                lim_{n→∞} B^η_12 (y) = (θ0 + 1)^2 e^{−θ0} / θ0
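A quick numerical check of this limit (computed directly from the displayed formula): for any fixed θ0 it is a finite positive constant, so the summary-based Bayes factor neither diverges nor vanishes as n grows.

  from math import exp

  for theta0 in (0.5, 1.0, 2.0, 5.0):
      print(theta0, (theta0 + 1) ** 2 * exp(-theta0) / theta0)
  # 0.5 -> 2.73..., 1.0 -> 1.47..., 2.0 -> 0.609..., 5.0 -> 0.0485...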
MA example

   [Figure: four panels, one per tolerance level, with frequencies of models 1 and 2]

  Evolution [against ε] of ABC Bayes factor, in terms of frequencies of
  visits to models MA(1) (left) and MA(2) (right) when ε equal to
  10, 1, .1, .01% quantiles on insufficient autocovariance distances. Sample
  of 50 points from a MA(2) with θ1 = 0.6, θ2 = 0.2. True Bayes factor
  equal to 17.71.
MA example

   [Figure: four panels, one per tolerance level, with frequencies of models 1 and 2]

  Evolution [against ε] of ABC Bayes factor, in terms of frequencies of
  visits to models MA(1) (left) and MA(2) (right) when ε equal to
  10, 1, .1, .01% quantiles on insufficient autocovariance distances. Sample
  of 50 points from a MA(1) model with θ1 = 0.6. True Bayes factor B21
  equal to .004.
The only safe cases??? [circa April 2011]



   Besides specific models like Gibbs random fields,
   using distances over the data itself escapes the discrepancy...
                            [Toni & Stumpf, 2010; Sousa & al., 2009]

   ...and so does the use of more informal model fitting measures
                                              [Ratmann & al., 2009]

   ...or use another type of approximation like empirical likelihood
                  [Mengersen et al., 2012, see Kerrie’s ASC 2012 talk]
ABC model choice consistency


Introduction

ABC

ABC as an inference machine

ABC for model choice

Model choice consistency
  Formalised framework
  Consistency results
  Summary statistics

ABCel
The starting point



   Central question to the validation of ABC for model choice:

   When is a Bayes factor based on an insufficient statistic T(y)
                           consistent?

   Note/warning: the conclusion drawn on T(y) through B^T_12 (y) necessarily
   differs from the one drawn on y through B12 (y)
                        [Marin, Pillai, X, & Rousseau, arXiv, 2012]
A benchmark, if toy, example




   Comparison suggested by referee of PNAS paper [thanks!]:
                             [X, Cornuet, Marin, & Pillai, Aug. 2011]
   Model M1 : y ∼ N(θ1 , 1) opposed
   to model M2 : y ∼ L(θ2 , 1/√2), Laplace distribution with mean θ2
   and scale parameter 1/√2 (variance one).
A benchmark, if toy, example



   Comparison suggested by referee of PNAS paper [thanks!]:
                              [X, Cornuet, Marin, & Pillai, Aug. 2011]
   Model M1 : y ∼ N(θ1 , 1) opposed
   to model M2 : y ∼ L(θ2 , 1/√2), Laplace distribution with mean θ2
   and scale parameter 1/√2 (variance one).
   Four possible statistics
    1. sample mean ȳ (sufficient for M1 if not M2 );
    2. sample median med(y) (insufficient);
    3. sample variance var(y) (ancillary);
    4. median absolute deviation mad(y) = med(|y − med(y)|);
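A small sketch computing these four statistics on simulated samples from the two models (the seed and sample size are illustrative):

  import numpy as np

  def four_statistics(y):
      med = np.median(y)
      return {"mean": np.mean(y),                 # sufficient for M1, not M2
              "median": med,                      # insufficient
              "var": np.var(y, ddof=1),           # ancillary (both variances equal one)
              "mad": np.median(np.abs(y - med))}  # median absolute deviation

  rng = np.random.default_rng(0)
  print(four_statistics(rng.normal(0.0, 1.0, 100)))                    # Gaussian sample
  print(four_statistics(rng.laplace(0.0, 1.0 / np.sqrt(2), 100)))      # Laplace, variance one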
A benchmark, if toy, example

   Comparison suggested by referee of PNAS paper [thanks!]:
                             [X, Cornuet, Marin, & Pillai, Aug. 2011]
   Model M1 : y ∼ N(θ1 , 1) opposed
   to model M2 : y ∼ L(θ2 , 1/√2), Laplace distribution with mean θ2
   and scale parameter 1/√2 (variance one).

   [Figure: boxplots, n = 100, Gauss versus Laplace]
A benchmark, if toy, example

   Comparison suggested by referee of PNAS paper [thanks!]:
                             [X, Cornuet, Marin, & Pillai, Aug. 2011]
   Model M1 : y ∼ N(θ1 , 1) opposed
   to model M2 : y ∼ L(θ2 , 1/√2), Laplace distribution with mean θ2
   and scale parameter 1/√2 (variance one).

   [Figure: two panels of boxplots, n = 100, Gauss versus Laplace]
Framework


  Starting from sample

                            y = (y1 , . . . , yn )

  the observed sample, not necessarily iid with true distribution

                                  y ∼ Pn

  Summary statistics

            T(y) = Tn = (T1 (y), T2 (y), · · · , Td (y)) ∈ Rd

  with true distribution Tn ∼ Gn .
ABC and empirical likelihood

 
Likelihood-free Design: a discussion
Likelihood-free Design: a discussionLikelihood-free Design: a discussion
Likelihood-free Design: a discussion
 
CISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceCISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergence
 
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsa discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
 
ABC based on Wasserstein distances
ABC based on Wasserstein distancesABC based on Wasserstein distances
ABC based on Wasserstein distances
 

Recently uploaded

Mixin Classes in Odoo 17 How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17  How to Extend Models Using Mixin ClassesMixin Classes in Odoo 17  How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17 How to Extend Models Using Mixin ClassesCeline George
 
Single or Multiple melodic lines structure
Single or Multiple melodic lines structureSingle or Multiple melodic lines structure
Single or Multiple melodic lines structuredhanjurrannsibayan2
 
Salient Features of India constitution especially power and functions
Salient Features of India constitution especially power and functionsSalient Features of India constitution especially power and functions
Salient Features of India constitution especially power and functionsKarakKing
 
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...pradhanghanshyam7136
 
On National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan FellowsOn National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan FellowsMebane Rash
 
Python Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docxPython Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docxRamakrishna Reddy Bijjam
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsTechSoup
 
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdf
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdfUGC NET Paper 1 Mathematical Reasoning & Aptitude.pdf
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdfNirmal Dwivedi
 
Understanding Accommodations and Modifications
Understanding  Accommodations and ModificationsUnderstanding  Accommodations and Modifications
Understanding Accommodations and ModificationsMJDuyan
 
1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdfQucHHunhnh
 
SOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning PresentationSOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning Presentationcamerronhm
 
Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfagholdier
 
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...ZurliaSoop
 
General Principles of Intellectual Property: Concepts of Intellectual Proper...
General Principles of Intellectual Property: Concepts of Intellectual  Proper...General Principles of Intellectual Property: Concepts of Intellectual  Proper...
General Principles of Intellectual Property: Concepts of Intellectual Proper...Poonam Aher Patil
 
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptxMaritesTamaniVerdade
 
Accessible Digital Futures project (20/03/2024)
Accessible Digital Futures project (20/03/2024)Accessible Digital Futures project (20/03/2024)
Accessible Digital Futures project (20/03/2024)Jisc
 
ICT role in 21st century education and it's challenges.
ICT role in 21st century education and it's challenges.ICT role in 21st century education and it's challenges.
ICT role in 21st century education and it's challenges.MaryamAhmad92
 
This PowerPoint helps students to consider the concept of infinity.
This PowerPoint helps students to consider the concept of infinity.This PowerPoint helps students to consider the concept of infinity.
This PowerPoint helps students to consider the concept of infinity.christianmathematics
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdfQucHHunhnh
 
Making communications land - Are they received and understood as intended? we...
Making communications land - Are they received and understood as intended? we...Making communications land - Are they received and understood as intended? we...
Making communications land - Are they received and understood as intended? we...Association for Project Management
 

Recently uploaded (20)

Mixin Classes in Odoo 17 How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17  How to Extend Models Using Mixin ClassesMixin Classes in Odoo 17  How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17 How to Extend Models Using Mixin Classes
 
Single or Multiple melodic lines structure
Single or Multiple melodic lines structureSingle or Multiple melodic lines structure
Single or Multiple melodic lines structure
 
Salient Features of India constitution especially power and functions
Salient Features of India constitution especially power and functionsSalient Features of India constitution especially power and functions
Salient Features of India constitution especially power and functions
 
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
 
On National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan FellowsOn National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan Fellows
 
Python Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docxPython Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docx
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The Basics
 
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdf
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdfUGC NET Paper 1 Mathematical Reasoning & Aptitude.pdf
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdf
 
Understanding Accommodations and Modifications
Understanding  Accommodations and ModificationsUnderstanding  Accommodations and Modifications
Understanding Accommodations and Modifications
 
1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdf
 
SOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning PresentationSOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning Presentation
 
Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdf
 
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
 
General Principles of Intellectual Property: Concepts of Intellectual Proper...
General Principles of Intellectual Property: Concepts of Intellectual  Proper...General Principles of Intellectual Property: Concepts of Intellectual  Proper...
General Principles of Intellectual Property: Concepts of Intellectual Proper...
 
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
 
Accessible Digital Futures project (20/03/2024)
Accessible Digital Futures project (20/03/2024)Accessible Digital Futures project (20/03/2024)
Accessible Digital Futures project (20/03/2024)
 
ICT role in 21st century education and it's challenges.
ICT role in 21st century education and it's challenges.ICT role in 21st century education and it's challenges.
ICT role in 21st century education and it's challenges.
 
This PowerPoint helps students to consider the concept of infinity.
This PowerPoint helps students to consider the concept of infinity.This PowerPoint helps students to consider the concept of infinity.
This PowerPoint helps students to consider the concept of infinity.
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
Making communications land - Are they received and understood as intended? we...
Making communications land - Are they received and understood as intended? we...Making communications land - Are they received and understood as intended? we...
Making communications land - Are they received and understood as intended? we...
 

ABC and empirical likelihood

  • 9-10. Econom’ections Similar exploration of simulation-based and approximation techniques in Econometrics: simulated method of moments, method of simulated moments, simulated pseudo-maximum-likelihood, indirect inference [Gouriéroux & Monfort, 1996], even though the motivation is partly-defined models rather than complex likelihoods
  • 11. Indirect inference Minimise [in θ] a distance between estimators β̂ based on a pseudo-model, for genuine observations and for observations simulated under the true model and the parameter θ. [Gouriéroux, Monfort & Renault, 1993; Smith, 1993; Gallant & Tauchen, 1996]
  • 12. Indirect inference (PML vs. PSE) Example of the pseudo-maximum-likelihood (PML): β̂(y) = arg max_β Σ_t log f(y_t | β, y_1:(t−1)), leading to arg min_θ ||β̂(y^o) − β̂(y_1(θ), . . . , y_S(θ))||² when y_s(θ) ∼ f(y|θ), s = 1, . . . , S
  • 13. Indirect inference (PML vs. PSE) Example of the pseudo-score-estimator (PSE): β̂(y) = arg min_β Σ_t [∂ log f/∂β (y_t | β, y_1:(t−1))]², leading to arg min_θ ||β̂(y^o) − β̂(y_1(θ), . . . , y_S(θ))||² when y_s(θ) ∼ f(y|θ), s = 1, . . . , S
  • 14-15. Consistent indirect inference “...in order to get a unique solution the dimension of the auxiliary parameter β must be larger than or equal to the dimension of the initial parameter θ. If the problem is just identified the different methods become easier...” Consistency depending on the criterion and on the asymptotic identifiability of θ [Gouriéroux & Monfort, 1996, p. 66]
  • 16. Choice of pseudo-model Arbitrariness of pseudo-model Pick model such that 1. β̂(θ) not flat (i.e. sensitive to changes in θ) 2. β̂(θ) not dispersed (i.e. robust against changes in y_s(θ)) [Frigessi & Heggland, 2004]
  • 17. Approximate Bayesian computation Introduction ABC Genesis of ABC ABC basics Advances and interpretations ABC as knn ABC as an inference machine ABC for model choice Model choice consistency ABCel
  • 18. Genetic background of ABC skip genetics ABC is a recent computational technique that only requires being able to sample from the likelihood f(·|θ) This technique stemmed from population genetics models, about 15 years ago, and population geneticists still contribute significantly to methodological developments of ABC. [Griffith & al., 1997; Tavaré & al., 1999]
  • 19-20. Demo-genetic inference Each model is characterized by a set of parameters θ that cover historical (time divergence, admixture time ...), demographics (population sizes, admixture rates, migration rates, ...) and genetic (mutation rate, ...) factors The goal is to estimate these parameters from a dataset of polymorphism (DNA sample) y observed at the present time Problem: most of the time, we cannot calculate the likelihood of the polymorphism data f(y|θ)...
  • 21-24. Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium Kingman’s genealogy: when the time axis is normalized, T(k) ∼ Exp(k(k − 1)/2) Mutations according to the Simple stepwise Mutation Model (SMM) • date of the mutations ∼ Poisson process with intensity θ/2 over the branches • MRCA = 100 • independent mutations: ±1 with pr. 1/2 Observations: leaves of the tree (sample of 8 genes), θ̂ = ?
  • 25. Much more interesting models. . . several independent locus Independent gene genealogies and mutations different populations linked by an evolutionary scenario made of divergences, admixtures, migrations between populations, etc. larger sample size usually between 50 and 100 genes MRCA τ2 τ1 A typical evolutionary scenario: POP 0 POP 1 POP 2
  • 26-27. Intractable likelihood Missing (too missing!) data structure: f(y|θ) = ∫_G f(y|G, θ) f(G|θ) dG cannot be computed in a manageable way... The genealogies are considered as nuisance parameters This modelling clearly differs from the phylogenetic perspective where the tree is the parameter of interest.
  • 28. a dubious ancestry... You went to school to learn, girl (. . . ) Why 2 plus 2 makes four Now, now, now, I’m gonna teach you (. . . ) All you gotta do is repeat after me! A, B, C! It’s easy as 1, 2, 3! Or simple as Do, Re, Mi! (. . . )
  • 29-30. A?B?C? A stands for approximate [wrong likelihood / picture] B stands for Bayesian C stands for computation [producing a parameter sample]
  • 31. A?B?C? [figure: panels of ABC posterior density estimates, each reported with its effective sample size (ESS)] A stands for approximate [wrong likelihood / picture] B stands for Bayesian C stands for computation [producing a parameter sample]
  • 32. How Bayesian is aBc? Could we turn the resolution into a Bayesian answer? ideally so (not meaningful: requires an ∞-ly powerful computer) asymptotically so (when sample size goes to ∞: meaningful?) approximation error unknown (w/o costly simulation) true Bayes for wrong model (formal and artificial) true Bayes for estimated likelihood (back to econometrics?)
  • 33-34. Untractable likelihood Back to stage zero: what can we do when a likelihood function f(y|θ) is well-defined but impossible / too costly to compute...? MCMC cannot be implemented! shall we give up Bayesian inference altogether?! or settle for an almost Bayesian inference/picture...?
  • 35-37. ABC methodology Bayesian setting: target is π(θ)f(x|θ) When likelihood f(x|θ) not in closed form, likelihood-free rejection technique: Foundation For an observation y ∼ f(y|θ), under the prior π(θ), if one keeps jointly simulating θ′ ∼ π(θ), z ∼ f(z|θ′), until the auxiliary variable z is equal to the observed value, z = y, then the selected θ′ ∼ π(θ|y) [Rubin, 1984; Diggle & Gratton, 1984; Tavaré et al., 1997]
  • 38-39. A as A...pproximative When y is a continuous random variable, strict equality z = y is replaced with a tolerance zone ρ(y, z) ≤ ε, where ρ is a distance Output distributed from π(θ) P_θ{ρ(y, z) < ε} ∝ π(θ | ρ(y, z) < ε) [Pritchard et al., 1999]
  • 40. ABC algorithm In most implementations, further degree of A...pproximation: Algorithm 1 Likelihood-free rejection sampler for i = 1 to N do repeat generate θ′ from the prior distribution π(·) generate z from the likelihood f(·|θ′) until ρ{η(z), η(y)} ≤ ε set θ_i = θ′ end for where η(y) defines a (not necessarily sufficient) statistic
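A minimal sketch of Algorithm 1 in Python, on an assumed toy model (Normal mean with known variance); the prior, simulator, summary statistic and tolerance below are illustrative choices, not part of the original talk.

```python
# Likelihood-free rejection sampler on a toy Normal-mean model (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
y_obs = rng.normal(2.0, 1.0, size=50)              # pretend-observed data

def prior():                                        # theta ~ N(0, 10)
    return rng.normal(0.0, 10.0)

def simulate(theta, n=50):                          # z ~ f(.|theta)
    return rng.normal(theta, 1.0, size=n)

def eta(x):                                         # summary statistic: sample mean
    return x.mean()

def abc_rejection(N=500, eps=0.2):
    sample = []
    for _ in range(N):
        while True:                                 # repeat ... until rho{eta(z), eta(y)} <= eps
            theta = prior()
            z = simulate(theta)
            if abs(eta(z) - eta(y_obs)) <= eps:
                sample.append(theta)
                break
    return np.array(sample)

theta_abc = abc_rejection()
print(theta_abc.mean(), theta_abc.std())
```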
  • 41-43. Output The likelihood-free algorithm samples from the marginal in z of: π_ε(θ, z|y) = π(θ)f(z|θ)I_{A_{ε,y}}(z) / ∫_{A_{ε,y}×Θ} π(θ)f(z|θ) dz dθ, where A_{ε,y} = {z ∈ D : ρ(η(z), η(y)) < ε}. The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution: π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|y). ...does it?!
  • 44. Output The likelihood-free algorithm samples from the marginal in z of: π_ε(θ, z|y) = π(θ)f(z|θ)I_{A_{ε,y}}(z) / ∫_{A_{ε,y}×Θ} π(θ)f(z|θ) dz dθ, where A_{ε,y} = {z ∈ D : ρ(η(z), η(y)) < ε}. The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the restricted posterior distribution: π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|η(y)). Not so good..! skip convergence details!
  • 45-46. Convergence of ABC What happens when ε → 0? For B ⊂ Θ, we have ∫_B π(θ) [∫_{A_{ε,y}} f(z|θ) dz] dθ / ∫_{A_{ε,y}×Θ} π(θ)f(z|θ) dz dθ = ∫_{A_{ε,y}} [∫_B f(z|θ)π(θ) dθ] dz / ∫_{A_{ε,y}×Θ} π(θ)f(z|θ) dz dθ = ∫_{A_{ε,y}} [∫_B f(z|θ)π(θ) dθ / m(z)] m(z) dz / ∫_{A_{ε,y}×Θ} π(θ)f(z|θ) dz dθ = ∫_{A_{ε,y}} π(B|z) m(z) dz / ∫_{A_{ε,y}×Θ} π(θ)f(z|θ) dz dθ, which indicates convergence for a continuous π(B|z).
  • 47-50. Convergence (do not attempt!) ...and the above does not apply to insufficient statistics: If η(y) is not a sufficient statistic, the best one can hope for is π(θ|η(y)), not π(θ|y) If η(y) is an ancillary statistic, the whole information contained in y is lost!, the “best” one can “hope” for is π(θ|η(y)) = π(θ) Bummer!!!
  • 51. MA example Inference on the parameters of a MA(q) model x_t = ε_t + Σ_{i=1}^q ϑ_i ε_{t−i}, with ε_t i.i.d. white noise bypass MA illustration Simple prior: uniform over the inverse [real and complex] roots in Q(u) = 1 − Σ_{i=1}^q ϑ_i u^i under the identifiability conditions
  • 52. MA example Inference on the parameters of a MA(q) model x_t = ε_t + Σ_{i=1}^q ϑ_i ε_{t−i}, with ε_t i.i.d. white noise bypass MA illustration Simple prior: uniform prior over the identifiability zone in the parameter space, i.e. triangle for MA(2)
  • 53-54. MA example (2) ABC algorithm thus made of 1. picking a new value (ϑ1, ϑ2) in the triangle 2. generating an iid sequence (ε′_t)_{−q<t≤T} 3. producing a simulated series (x′_t)_{1≤t≤T} Distance: basic distance between the series ρ((x_t)_{1≤t≤T}, (x′_t)_{1≤t≤T}) = Σ_{t=1}^T (x_t − x′_t)², or distance between summary statistics like the q = 2 autocorrelations τ_j = Σ_{t=j+1}^T x_t x_{t−j}
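A possible implementation of this MA(2) scheme: uniform prior over the invertibility triangle (an assumption for the "identifiability zone"), iid N(0,1) noise, the two lag autocovariances τ_1, τ_2 as summaries, and a tolerance taken as a quantile of the simulated distances. All tuning values below are illustrative.

```python
# ABC for an MA(2) model with autocovariance summaries (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
T = 100

def sample_prior():
    while True:                                   # rejection sampling of the triangle
        t1, t2 = rng.uniform(-2, 2), rng.uniform(-1, 1)
        if t1 + t2 > -1 and t1 - t2 < 1:
            return t1, t2

def simulate(t1, t2):
    eps = rng.normal(size=T + 2)                  # iid N(0,1) noise
    return eps[2:] + t1 * eps[1:-1] + t2 * eps[:-2]

def summary(x):                                   # tau_1, tau_2 as on the slide
    return np.array([np.sum(x[1:] * x[:-1]), np.sum(x[2:] * x[:-2])])

y = simulate(0.6, 0.2)                            # pretend-observed series
s_obs = summary(y)

N, alpha = 20_000, 0.01                           # reference table + quantile tolerance
thetas = np.array([sample_prior() for _ in range(N)])
dists = np.array([np.linalg.norm(summary(simulate(*th)) - s_obs) for th in thetas])
keep = dists <= np.quantile(dists, alpha)
print(thetas[keep].mean(axis=0))
```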
  • 55-57. Comparison of distance impact [figures: ABC posterior samples of θ1 and θ2] Impact of tolerance on ABC sample against either distance (ε = 100%, 10%, 1%, 0.1%) for an MA(2) model
  • 58. Comments Role of distance paramount (because ε ≠ 0) Scaling of components of η(y) is also determinant ε matters little if “small enough” representative of “curse of dimensionality” small is beautiful! the data as a whole may be paradoxically weakly informative for ABC
  • 59-62. ABC (simul’) advances how approximative is ABC? ABC as knn Simulating from the prior is often poor in efficiency Either modify the proposal distribution on θ to increase the density of x’s within the vicinity of y... [Marjoram et al, 2003; Bortot et al., 2007; Sisson et al., 2007] ...or by viewing the problem as a conditional density estimation and by developing techniques to allow for larger ε [Beaumont et al., 2002] ...or even by including ε in the inferential framework [ABCµ] [Ratmann et al., 2009]
  • 63. ABC-NP Better usage of [prior] simulations by adjustement: instead of throwing away θ′ such that ρ(η(z), η(y)) > ε, replace θ′’s with locally regressed transforms θ* = θ′ − {η(z) − η(y)}ᵀ β̂ [Csilléry et al., TEE, 2010] where β̂ is obtained by [NP] weighted least square regression on (η(z) − η(y)) with weights K_δ{ρ(η(z), η(y))} [Beaumont et al., 2002, Genetics]
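A sketch of this weighted least squares adjustment on an assumed toy Normal model, with Epanechnikov weights and a quantile tolerance; every specific below (model, summaries, budget) is an illustrative assumption, not the original implementation.

```python
# Local regression adjustment of ABC draws (illustrative sketch).
import numpy as np

rng = np.random.default_rng(4)
y_obs = rng.normal(2.0, 1.0, 100)
s_obs = np.array([y_obs.mean(), y_obs.std()])

# reference table simulated from the prior
N = 50_000
theta = rng.normal(0.0, 5.0, N)
z = rng.normal(theta[:, None], 1.0, (N, 100))
s = np.column_stack([z.mean(axis=1), z.std(axis=1)])

d = np.linalg.norm(s - s_obs, axis=1)
delta = np.quantile(d, 0.01)                       # tolerance as a quantile of distances
keep = d <= delta
w = 1 - (d[keep] / delta) ** 2                     # Epanechnikov kernel weights K_delta

X = np.column_stack([np.ones(keep.sum()), s[keep] - s_obs])   # regression on eta(z) - eta(y)
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ theta[keep])    # weighted least squares
theta_adj = theta[keep] - (s[keep] - s_obs) @ beta[1:]        # theta* = theta - {eta(z)-eta(y)}' beta

print(theta[keep].mean(), theta_adj.mean())
```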
  • 64-65. ABC-NP (regression) Also found in the subsequent literature, e.g. in Fearnhead-Prangle (2012): weight directly simulation by K_δ{ρ(η(z(θ)), η(y))} or (1/S) Σ_{s=1}^S K_δ{ρ(η(z_s(θ)), η(y))} [consistent estimate of f(η|θ)] Curse of dimensionality: poor estimate when d = dim(η) is large...
  • 66. ABC-NP (density estimation) Use of the kernel weights K_δ{ρ(η(z(θ)), η(y))} leads to the NP estimate of the posterior expectation Σ_i θ_i K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))} [Blum, JASA, 2010]
  • 67. ABC-NP (density estimation) Use of the kernel weights K_δ{ρ(η(z(θ)), η(y))} leads to the NP estimate of the posterior conditional density Σ_i K̃_b(θ_i − θ) K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))} [Blum, JASA, 2010]
  • 68-70. ABC-NP (density estimations) Other versions incorporating regression adjustments Σ_i K̃_b(θ*_i − θ) K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))} In all cases, error E[ĝ(θ|y)] − g(θ|y) = c_b b² + c_δ δ² + O_P(b² + δ²) + O_P(1/nδ^d), var(ĝ(θ|y)) = [c/(nbδ^d)] (1 + o_P(1)) [Blum, JASA, 2010; standard NP calculations]
  • 71-72. ABC-NCH Incorporating non-linearities and heteroscedasticities: θ* = m̂(η(y)) + [θ − m̂(η(z))] σ̂(η(y))/σ̂(η(z)) where m̂(η) estimated by non-linear regression (e.g., neural network) and σ̂(η) estimated by non-linear regression on residuals log{θ_i − m̂(η_i)}² = log σ²(η_i) + ξ_i [Blum & François, 2009]
  • 73-75. ABC as knn [Biau et al., 2012, arXiv:1207.6461] Practice of ABC: determine tolerance as a quantile on observed distances, say 10% or 1% quantile, ε = ε_N = q_α(d_1, . . . , d_N) Interpretation of ε as nonparametric bandwidth only approximation of the actual practice [Blum & François, 2010] ABC is a k-nearest neighbour (knn) method with k_N = N ε_N [Loftsgaarden & Quesenberry, 1965]
  • 76-77. ABC consistency Provided k_N / log log N → ∞ and k_N / N → 0 as N → ∞, for almost all s_0 (with respect to the distribution of S), with probability 1, (1/k_N) Σ_{j=1}^{k_N} ϕ(θ_j) → E[ϕ(θ_j)|S = s_0] [Devroye, 1982] Biau et al. (2012) also recall pointwise and integrated mean square error consistency results on the corresponding kernel estimate of the conditional posterior distribution, under constraints k_N → ∞, k_N / N → 0, h_N → 0 and h_N^p k_N → ∞
  • 78-79. Rates of convergence Further assumptions (on target and kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like when m = 1, 2, 3, k_N ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)}; when m = 4, k_N ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)} log N; when m > 4, k_N ≈ N^{(p+4)/(m+p+4)} and rate N^{−4/(m+p+4)} [Biau et al., 2012, arXiv:1207.6461] Only applies to sufficient summary statistics
  • 80. ABC inference machine Introduction ABC ABC as an inference machine Error inc. Exact BC and approximate targets summary statistic ABC for model choice Model choice consistency ABCel
  • 81-84. How much Bayesian aBc is..? maybe a convergent method of inference (meaningful? sufficient? foreign?) approximation error unknown (w/o simulation) pragmatic Bayes (there is no other solution!) many calibration issues (tolerance, distance, statistics) ...should Bayesians care?! yes they should!!! [to ABCel]
  • 85-87. ABCµ Idea Infer about the error as well as about the parameter: Use of a joint density f(θ, ε|y) ∝ ξ(ε|y, θ) × π_θ(θ) × π_ε(ε) where y is the data, and ξ(ε|y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and y when z ∼ f(z|θ) Warning! Replacement of ξ(ε|y, θ) with a non-parametric kernel approximation. [Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]
  • 88-89. ABCµ details Multidimensional distances ρ_k (k = 1, . . . , K) and errors ε_k = ρ_k(η_k(z), η_k(y)), with ε_k ∼ ξ_k(ε|y, θ) ≈ ξ̂_k(ε|y, θ) = (1/Bh_k) Σ_b K[{ε_k − ρ_k(η_k(z_b), η_k(y))}/h_k] then used in replacing ξ(ε|y, θ) with min_k ξ̂_k(ε|y, θ) ABCµ involves acceptance probability [π(θ′, ε′) q(θ′, θ) q(ε′, ε) min_k ξ̂_k(ε′|y, θ′)] / [π(θ, ε) q(θ, θ′) q(ε, ε′) min_k ξ̂_k(ε|y, θ)]
  • 90. ABCµ multiple errors [ c Ratmann et al., PNAS, 2009]
  • 91. ABCµ for model choice [ c Ratmann et al., PNAS, 2009]
  • 92-93. Wilkinson’s exact BC (not exactly!) ABC approximation error (i.e. non-zero tolerance) replaced with exact simulation from a controlled approximation to the target, convolution of true posterior with kernel function π_ε(θ, z|y) = π(θ)f(z|θ)K_ε(y − z) / ∫ π(θ)f(z|θ)K_ε(y − z) dz dθ, with K_ε kernel parameterised by bandwidth ε. [Wilkinson, 2008] Theorem The ABC algorithm based on the assumption of a randomised observation ỹ = y + ξ, ξ ∼ K_ε, and an acceptance probability of K_ε(y − z)/M gives draws from the posterior distribution π(θ|y).
  • 94. How exact a BC? “Using ε to represent measurement error is straightforward, whereas using ε to model the model discrepancy is harder to conceptualize and not as commonly used” [Richard Wilkinson, 2008]
  • 95. How exact a BC? Pros Pseudo-data from true model and observed data from noisy model Interesting perspective in that outcome is completely controlled Link with ABCµ and assuming y is observed with a measurement error with density K Relates to the theory of model approximation [Kennedy & O’Hagan, 2001] Cons Requires K to be bounded by M True approximation error never assessed Requires a modification of the standard ABC algorithm
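A minimal sketch of the randomised-acceptance mechanism in Wilkinson's result, assuming a Gaussian kernel K_ε bounded by M = K_ε(0) and a toy Normal simulator; both the kernel and the model are illustrative, not the original construction.

```python
# ABC with kernel-weighted acceptance K_eps(y - z) / M (illustrative sketch).
import numpy as np

rng = np.random.default_rng(2)
eps = 0.2
y_obs = 1.3                                   # a single observed value

def K(u):                                     # unnormalised Gaussian kernel, M = K(0) = 1
    return np.exp(-0.5 * (u / eps) ** 2)

accepted = []
while len(accepted) < 2000:
    theta = rng.normal(0.0, 5.0)              # prior draw
    z = rng.normal(theta, 1.0)                # z ~ f(.|theta)
    if rng.uniform() < K(y_obs - z):          # accept with probability K(y - z) / M
        accepted.append(theta)

print(np.mean(accepted), np.std(accepted))
```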
  • 96-97. ABC for HMMs Specific case of a hidden Markov model X_{t+1} ∼ Q_θ(X_t, ·), Y_{t+1} ∼ g_θ(·|x_t), where only y⁰_{1:n} is observed. [Dean, Singh, Jasra, & Peters, 2011] Use of specific constraints, adapted to the Markov structure: y_1 ∈ B(y⁰_1, ε) × · · · × y_n ∈ B(y⁰_n, ε)
  • 98-99. ABC-MLE for HMMs ABC-MLE defined by θ̂_n = arg max_θ P_θ(Y_1 ∈ B(y⁰_1, ε), . . . , Y_n ∈ B(y⁰_n, ε)) Exact MLE for the likelihood p^ε_θ(y⁰_1, . . . , y⁰_n) corresponding to the perturbed process (x_t, y_t + εz_t)_{1≤t≤n}, z_t ∼ U(B(0, 1)) [same basis as Wilkinson!] [Dean, Singh, Jasra, & Peters, 2011]
  • 100-101. ABC-MLE is biased ABC-MLE is asymptotically (in n) biased with target l_ε(θ) = E_{θ*}[log p^ε_θ(Y_1|Y_{−∞:0})] but ABC-MLE converges to true value in the sense l_{ε_n}(θ_n) → l(θ) for all sequences (θ_n) converging to θ and ε_n → 0
  • 102. Noisy ABC-MLE Idea: Modify instead the data from the start (y⁰_1 + ζ_1, . . . , y⁰_n + ζ_n) [see Fearnhead-Prangle] noisy ABC-MLE estimate arg max_θ P_θ(Y_1 ∈ B(y⁰_1 + ζ_1, ε), . . . , Y_n ∈ B(y⁰_n + ζ_n, ε)) [Dean, Singh, Jasra, & Peters, 2011]
  • 103. Consistent noisy ABC-MLE Degrading the data improves the estimation performances: Noisy ABC-MLE is asymptotically (in n) consistent under further assumptions, the noisy ABC-MLE is asymptotically normal increase in variance of order ε^{−2} likely degradation in precision or computing time due to the lack of summary statistic [curse of dimensionality]
  • 104. SMC for ABC likelihood Algorithm 2 SMC ABC for HMMs Given θ for k = 1, . . . , n do generate proposals (x¹_k, y¹_k), . . . , (x^N_k, y^N_k) from the model weigh each proposal with ω^l_k = I_{B(y⁰_k + ζ_k, ε)}(y^l_k) renormalise the weights and sample the x^l_k’s accordingly end for approximate the likelihood by ∏_{k=1}^n [Σ_{l=1}^N ω^l_k / N] [Jasra, Singh, Martin, & McCoy, 2010]
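A sketch of Algorithm 2 on an assumed Gaussian random-walk HMM (X_{t+1} = X_t + N(0, σ²), Y_t ∼ N(X_t, τ²)); the model, tolerance, particle number, and the choice ζ_k = 0 are illustrative assumptions.

```python
# SMC approximation of the ABC likelihood for a toy HMM (illustrative sketch).
import numpy as np

rng = np.random.default_rng(3)
sigma, tau, eps, N = 0.5, 0.3, 0.4, 500

def abc_smc_loglik(y, theta_sigma):
    x = np.zeros(N)                                  # particles at time 0
    loglik = 0.0
    for t in range(len(y)):
        x = x + rng.normal(0.0, theta_sigma, N)      # propagate hidden states
        y_sim = x + rng.normal(0.0, tau, N)          # propose pseudo-observations
        w = (np.abs(y_sim - y[t]) < eps).astype(float)   # indicator of B(y_t, eps)
        if w.sum() == 0:
            return -np.inf                           # every particle rejected
        loglik += np.log(w.mean())                   # factor (1/N) sum_l w_l
        x = x[rng.choice(N, size=N, p=w / w.sum())]  # resample surviving particles
    return loglik

# simulate data, then compare the ABC log-likelihood at two parameter values
x_true = np.cumsum(rng.normal(0.0, sigma, 50))
y = x_true + rng.normal(0.0, tau, 50)
print(abc_smc_loglik(y, 0.5), abc_smc_loglik(y, 2.0))
```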
  • 105-107. Which summary? Fundamental difficulty of the choice of the summary statistic when there is no non-trivial sufficient statistic Starting from a large collection of available summary statistics, Joyce and Marjoram (2008) consider the sequential inclusion into the ABC target, with a stopping rule based on a likelihood ratio test Not taking into account the sequential nature of the tests Depends on parameterisation Order of inclusion matters likelihood ratio test?!
  • 108-109. Which summary for model choice? Depending on the choice of η(·), the Bayes factor based on this insufficient statistic, B^η_12(y) = ∫ π_1(θ_1) f^η_1(η(y)|θ_1) dθ_1 / ∫ π_2(θ_2) f^η_2(η(y)|θ_2) dθ_2, is consistent or not. [X, Cornuet, Marin, & Pillai, 2012] Consistency only depends on the range of E_i[η(y)] under both models. [Marin, Pillai, X, & Rousseau, 2012]
  • 110. Semi-automatic ABC Fearnhead and Prangle (2010) study ABC and the selection of the summary statistic in close proximity to Wilkinson’s proposal ABC considered as inferential method and calibrated as such randomised (or ‘noisy’) version of the summary statistics η̃(y) = η(y) + τε derivation of a well-calibrated version of ABC, i.e. an algorithm that gives proper predictions for the distribution associated with this randomised summary statistic
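A sketch of the semi-automatic construction on a toy Normal-mean model: a pilot regression of θ on a vector of candidate statistics approximates E[θ|y], and the fitted linear combination is then used as the one-dimensional summary in a second ABC pass. The candidate statistics, budgets and toy model are assumptions made for illustration.

```python
# Semi-automatic summary statistic via a pilot regression (illustrative sketch).
import numpy as np

rng = np.random.default_rng(5)
y_obs = rng.normal(2.0, 1.0, 100)

def candidates(x):          # candidate summaries: mean, median, std, min, max
    return np.array([x.mean(), np.median(x), x.std(), x.min(), x.max()])

def simulate(theta, n=100):
    return rng.normal(theta, 1.0, n)

# pilot stage: linear regression of theta on the candidate statistics
M = 10_000
theta_pilot = rng.normal(0.0, 5.0, M)
S_pilot = np.array([candidates(simulate(t)) for t in theta_pilot])
X = np.column_stack([np.ones(M), S_pilot])
coef, *_ = np.linalg.lstsq(X, theta_pilot, rcond=None)

def eta(x):                 # fitted approximation of E[theta | x], used as summary
    return coef[0] + candidates(x) @ coef[1:]

# second ABC pass with the constructed one-dimensional summary
eta_obs = eta(y_obs)
theta_new = rng.normal(0.0, 5.0, M)
d = np.array([abs(eta(simulate(t)) - eta_obs) for t in theta_new])
keep = d <= np.quantile(d, 0.01)
print(theta_new[keep].mean())
```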
  • 111-112. Summary [of F&P/statistics] optimality of the posterior expectation E[θ|y] of the parameter of interest as summary statistics η(y)! use of the standard quadratic loss function (θ − θ_0)ᵀ A (θ − θ_0) recent extension to model choice, optimality of Bayes factor B_12(y) [F&P, ISBA 2012 talk]
  • 113-117. Conclusion Choice of summary statistics is paramount for ABC validation/performance At best, ABC approximates π(· | η(y)) Model selection feasible with ABC [with caution!] For estimation, consistency if {θ; µ(θ) = µ_0} = θ_0 For testing, consistency if {µ_1(θ_1), θ_1 ∈ Θ_1} ∩ {µ_2(θ_2), θ_2 ∈ Θ_2} = ∅ [Marin et al., 2011]
  • 118. ABC for model choice Introduction ABC ABC as an inference machine ABC for model choice BMC Principle Gibbs random fields (counterexample) Generic ABC model choice Model choice consistency ABCel
  • 119-120. Bayesian model choice BMC Principle Several models M_1, M_2, . . . are considered simultaneously for dataset y and model index M central to inference. Use of prior π(M = m), plus prior distribution on the parameter conditional on the value m of the model index, π_m(θ_m) Goal is to derive the posterior distribution of M, π(M = m|data), a challenging computational target when models are complex.
  • 121. Generic ABC for model choice Algorithm 3 Likelihood-free model choice sampler (ABC-MC) for t = 1 to T do repeat Generate m from the prior π(M = m) Generate θ_m from the prior π_m(θ_m) Generate z from the model f_m(z|θ_m) until ρ{η(z), η(y)} < ε Set m^(t) = m and θ^(t) = θ_m end for [Grelaud & al., 2009; Toni & al., 2009]
  • 122-123. ABC estimates Posterior probability π(M = m|y) approximated by the frequency of acceptances from model m, (1/T) Σ_{t=1}^T I_{m^(t) = m} Extension to a weighted polychotomous logistic regression estimate of π(M = m|y), with non-parametric kernel weights [Cornuet et al., DIYABC, 2009]
  • 124-125. Potts model Skip MRFs Potts model Distribution with an energy function of the form θS(y) = θ Σ_{l∼i} δ_{y_l = y_i} where l∼i denotes a neighbourhood structure In most realistic settings, summation Z_θ = Σ_{x∈X} exp{θS(x)} involves too many terms to be manageable and numerical approximations cannot always be trusted
  • 126. Neighbourhood relations Setup Choice to be made between M neighbourhood relations i ∼_m i′ (0 ≤ m ≤ M − 1) with S_m(x) = Σ_{i∼_m i′} I_{x_i = x_{i′}} driven by the posterior probabilities of the models.
  • 127-128. Model index Computational target: P(M = m|x) ∝ ∫_{Θ_m} f_m(x|θ_m) π_m(θ_m) dθ_m π(M = m) If S(x) sufficient statistic for the joint parameters (M, θ_0, . . . , θ_{M−1}), P(M = m|x) = P(M = m|S(x)).
  • 129-130. Sufficient statistics in Gibbs random fields Each model m has its own sufficient statistic S_m(·) and S(·) = (S_0(·), . . . , S_{M−1}(·)) is also (model-)sufficient. Explanation: For Gibbs random fields, x|M = m ∼ f_m(x|θ_m) = f¹_m(x|S(x)) f²_m(S(x)|θ_m) = (1/n(S(x))) f²_m(S(x)|θ_m) where n(S(x)) = #{x̃ ∈ X : S(x̃) = S(x)} S(x) is sufficient for the joint parameters
• 131. Toy example iid Bernoulli model versus two-state first-order Markov chain, i.e. f0(x | θ0) = exp{θ0 Σ_{i=1}^n I{x_i = 1}} / {1 + exp(θ0)}^n versus f1(x | θ1) = (1/2) exp{θ1 Σ_{i=2}^n I{x_i = x_{i−1}}} / {1 + exp(θ1)}^{n−1}, with priors θ0 ∼ U(−5, 5) and θ1 ∼ U(0, 6) (inspired by “phase transition” boundaries).
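A short sketch of the two model-wise statistics of this toy example, S0(x) = Σ_i I{x_i = 1} and S1(x) = Σ_i I{x_i = x_{i−1}}, together with a simulator for the Markov-chain model; the uniform initial distribution of the chain matches the factor 1/2 in f1, while the seed, parameter value and sample size below are arbitrary.

```python
import numpy as np

# Model-wise sufficient statistics and a simulator for the two-state
# first-order Markov chain (model m = 1) of the toy example.

rng = np.random.default_rng(2)

def S0(x):
    return np.count_nonzero(x == 1)            # Bernoulli-model statistic

def S1(x):
    return np.count_nonzero(x[1:] == x[:-1])   # Markov-chain statistic

def simulate_markov(theta1, n):
    """x_i repeats x_{i-1} with probability exp(theta1)/(1+exp(theta1))."""
    p_same = np.exp(theta1) / (1.0 + np.exp(theta1))
    x = np.empty(n, dtype=int)
    x[0] = rng.integers(0, 2)                  # uniform initial state
    same = rng.uniform(size=n - 1) < p_same
    for i in range(1, n):
        x[i] = x[i - 1] if same[i - 1] else 1 - x[i - 1]
    return x

x = simulate_markov(theta1=2.0, n=100)
print(S0(x), S1(x))   # the pair (S0, S1) is sufficient for (M, theta)
```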
• 132. About sufficiency If η1(x) is a sufficient statistic for model m = 1 and parameter θ1, and η2(x) is a sufficient statistic for model m = 2 and parameter θ2, then (η1(x), η2(x)) is not always sufficient for (m, θm). c Potential loss of information at the testing level
• 134. Poisson/geometric example Sample x = (x1, . . . , xn) from either a Poisson P(λ) or from a geometric G(p). The sum S = Σ_{i=1}^n x_i = η(x) is a sufficient statistic for either model but not simultaneously
• 135. Limiting behaviour of B12 (T → ∞) ABC approximation B12(y) = Σ_{t=1}^T I{mt = 1} I{ρ(η(zt), η(y)) ≤ ε} / Σ_{t=1}^T I{mt = 2} I{ρ(η(zt), η(y)) ≤ ε}, where the (mt, zt)'s are simulated from the (joint) prior. As T goes to infinity, the limit is B12^ε(y) = ∫ I{ρ(η(z), η(y)) ≤ ε} π1(θ1) f1(z | θ1) dz dθ1 / ∫ I{ρ(η(z), η(y)) ≤ ε} π2(θ2) f2(z | θ2) dz dθ2 = ∫ I{ρ(η, η(y)) ≤ ε} π1(θ1) f1^η(η | θ1) dη dθ1 / ∫ I{ρ(η, η(y)) ≤ ε} π2(θ2) f2^η(η | θ2) dη dθ2, where f1^η(η | θ1) and f2^η(η | θ2) denote the distributions of η(z)
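For readability, the same two-step limit can be typeset as follows (ε denotes the ABC tolerance; amsmath/amssymb are assumed for the symbols):

```latex
\[
B_{12}(y) \;=\;
\frac{\sum_{t=1}^{T} \mathbb{I}\{m_t = 1\}\,\mathbb{I}\{\rho(\eta(z_t),\eta(y)) \le \varepsilon\}}
     {\sum_{t=1}^{T} \mathbb{I}\{m_t = 2\}\,\mathbb{I}\{\rho(\eta(z_t),\eta(y)) \le \varepsilon\}}
\;\xrightarrow[\,T\to\infty\,]{}\;
\frac{\int \mathbb{I}\{\rho(\eta,\eta(y)) \le \varepsilon\}\,\pi_1(\theta_1)\,f_1^{\eta}(\eta\mid\theta_1)\,d\eta\,d\theta_1}
     {\int \mathbb{I}\{\rho(\eta,\eta(y)) \le \varepsilon\}\,\pi_2(\theta_2)\,f_2^{\eta}(\eta\mid\theta_2)\,d\eta\,d\theta_2}
\]
```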
• 137. Limiting behaviour of B12 (ε → 0) When ε goes to zero, B12^η(y) = ∫ π1(θ1) f1^η(η(y) | θ1) dθ1 / ∫ π2(θ2) f2^η(η(y) | θ2) dθ2. c Bayes factor based on the sole observation of η(y)
• 139. Limiting behaviour of B12 (under sufficiency) If η(y) is a sufficient statistic in both models, fi(y | θi) = gi(y) fi^η(η(y) | θi). Thus B12(y) = ∫_{Θ1} π1(θ1) g1(y) f1^η(η(y) | θ1) dθ1 / ∫_{Θ2} π2(θ2) g2(y) f2^η(η(y) | θ2) dθ2 = [g1(y)/g2(y)] ∫ π1(θ1) f1^η(η(y) | θ1) dθ1 / ∫ π2(θ2) f2^η(η(y) | θ2) dθ2 = [g1(y)/g2(y)] B12^η(y). [Didelot, Everitt, Johansen & Lawson, 2011] c No discrepancy only when cross-model sufficiency
• 141. Poisson/geometric example (back) Sample x = (x1, . . . , xn) from either a Poisson P(λ) or from a geometric G(p). Discrepancy ratio g1(x)/g2(x) = [S! n^{−S} / Π_i x_i!] / [1 / C(n+S−1, S)]
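A quick numerical sketch of this ratio: the two illustrative datasets below share the same sum S yet give very different values of g1(x)/g2(x), so the full Bayes factor B12(x) = [g1(x)/g2(x)] B12^η(x) cannot be recovered from S alone. The datasets are arbitrary choices for illustration.

```python
from math import comb, factorial, prod

# Discrepancy ratio g1(x)/g2(x) = [S! n^{-S} / prod_i x_i!] * C(n+S-1, S)
# multiplying the summary-based Bayes factor B12^eta to give B12.

def discrepancy_ratio(x):
    n, S = len(x), sum(x)
    g1 = factorial(S) / (n ** S * prod(factorial(xi) for xi in x))
    g2 = 1 / comb(n + S - 1, S)
    return g1 / g2

x1 = [2, 2, 2, 2, 2]      # S = 10, evenly spread
x2 = [10, 0, 0, 0, 0]     # S = 10, concentrated
print(discrepancy_ratio(x1), discrepancy_ratio(x2))
```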
• 142. Poisson/geometric discrepancy Range of B12^η(x) versus B12(x): the values produced have nothing in common.
• 144. Formal recovery Creating an encompassing exponential family f(x | θ1, θ2, α1, α2) ∝ exp{θ1^T η1(x) + θ2^T η2(x) + α1 t1(x) + α2 t2(x)} leads to a sufficient statistic (η1(x), η2(x), t1(x), t2(x)) [Didelot, Everitt, Johansen & Lawson, 2011] In the Poisson/geometric case, if Π_i x_i! is added to S, no discrepancy
• 145. Formal recovery (cont.) Only applies in genuine sufficiency settings... c Inability to evaluate information loss due to summary statistics
• 146. Meaning of the ABC-Bayes factor The ABC approximation to the Bayes factor is based solely on the summary statistics... In the Poisson/geometric case, if E[yi] = θ0 > 0, lim_{n→∞} B12^η(y) = (θ0 + 1)^2 e^{−θ0} / θ0
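A short numerical check of this limit over an arbitrary grid of θ0 values (a sketch, not from the talk):

```python
import numpy as np

# Large-n limit of the summary-based Bayes factor in the Poisson/geometric
# example: lim_{n -> inf} B12^eta(y) = (theta0 + 1)^2 exp(-theta0) / theta0,
# with theta0 = E[y_i]. The grid of theta0 values is arbitrary.

theta0 = np.linspace(0.5, 5.0, 10)
limit = (theta0 + 1) ** 2 * np.exp(-theta0) / theta0

# For every fixed theta0 the limit is a finite nonzero constant: the
# summary-based factor tends neither to infinity nor to zero as n grows,
# whichever model generated the data, hence it cannot be consistent.
for t, b in zip(theta0, limit):
    print(f"theta0 = {t:4.2f}   lim B12^eta = {b:6.3f}")
```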
• 148. MA example [Figure] Evolution [against ε] of the ABC Bayes factor, in terms of frequencies of visits to models MA(1) (left) and MA(2) (right), when ε equals the 10, 1, .1, .01% quantiles on insufficient autocovariance distances. Sample of 50 points from an MA(2) model with θ1 = 0.6, θ2 = 0.2. True Bayes factor equal to 17.71.
• 149. MA example [Figure] Evolution [against ε] of the ABC Bayes factor, in terms of frequencies of visits to models MA(1) (left) and MA(2) (right), when ε equals the 10, 1, .1, .01% quantiles on insufficient autocovariance distances. Sample of 50 points from an MA(1) model with θ1 = 0.6. True Bayes factor B21 equal to .004.
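For completeness, a sketch of the ingredients behind this example: simulating MA(1)/MA(2) series and computing the first two empirical autocovariances that serve as (insufficient) summary statistics. The seed and the unit noise scale are arbitrary assumptions of this sketch.

```python
import numpy as np

# Simulate an MA(q) series and compute the lag-1 and lag-2 autocovariances
# used as summary statistics eta(x) in the MA model-choice example.

rng = np.random.default_rng(3)

def simulate_ma(thetas, n):
    """MA(q): x_t = e_t + theta_1 e_{t-1} + ... + theta_q e_{t-q}."""
    q = len(thetas)
    e = rng.standard_normal(n + q)
    return np.array([e[t + q] + sum(th * e[t + q - j - 1]
                                    for j, th in enumerate(thetas))
                     for t in range(n)])

def autocov(x, lag):
    xc = x - x.mean()
    return np.dot(xc[:-lag], xc[lag:]) / len(x) if lag else np.dot(xc, xc) / len(x)

x = simulate_ma([0.6, 0.2], n=50)          # pseudo-observed MA(2) sample
print(autocov(x, 1), autocov(x, 2))        # summary statistics eta(x)
```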
• 150. The only safe cases??? [circa April 2011] Besides specific models like Gibbs random fields, using distances over the data itself escapes the discrepancy... [Toni & Stumpf, 2010; Sousa & al., 2009] ...and so does the use of more informal model fitting measures [Ratmann & al., 2009] ...as does the use of another type of approximation, such as empirical likelihood [Mengersen et al., 2012, see Kerrie's ASC 2012 talk]
  • 153. ABC model choice consistency Introduction ABC ABC as an inference machine ABC for model choice Model choice consistency Formalised framework Consistency results Summary statistics ABCel
• 154. The starting point Central question to the validation of ABC for model choice: When is a Bayes factor based on an insufficient statistic T(y) consistent? Note/warning: the conclusion drawn on T(y) through B12^T(y) necessarily differs from the conclusion drawn on y through B12(y) [Marin, Pillai, X, & Rousseau, arXiv, 2012]
• 157. A benchmark, if toy, example Comparison suggested by a referee of the PNAS paper [thanks!]: [X, Cornuet, Marin, & Pillai, Aug. 2011] Model M1: y ∼ N(θ1, 1) opposed to model M2: y ∼ L(θ2, 1/√2), a Laplace distribution with mean θ2 and scale parameter 1/√2 (variance one). Four possible statistics: 1. sample mean ȳ (sufficient for M1, not for M2); 2. sample median med(y) (insufficient); 3. sample variance var(y) (ancillary); 4. median absolute deviation mad(y) = med(|y − med(y)|).
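The four candidate statistics, computed in a short sketch on one pseudo-sample; the data-generating model and sample size below are arbitrary choices for illustration.

```python
import numpy as np

# The four candidate summary statistics of the Gauss/Laplace benchmark,
# evaluated on a single simulated sample.

rng = np.random.default_rng(4)
y = rng.normal(0.0, 1.0, size=100)             # pseudo-sample from M1

mean_y = y.mean()                              # sufficient for M1, not M2
med_y = np.median(y)                           # insufficient
var_y = y.var(ddof=1)                          # ancillary
mad_y = np.median(np.abs(y - np.median(y)))    # median absolute deviation

print(mean_y, med_y, var_y, mad_y)
```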
• 158–159. A benchmark, if toy, example (cont.) [Figures] Boxplot summaries for n = 100, with results grouped by the true model (Gauss versus Laplace); values lie in [0, 1].
• 160. Framework Starting from the observed sample y = (y1, . . . , yn), not necessarily iid, with true distribution y ∼ Pn. Summary statistics T(y) = Tn = (T1(y), T2(y), · · · , Td(y)) ∈ R^d, with true distribution Tn ∼ Gn.