On The Foundations Of Statistical Inference

            by ALLAN BIRNBAUM

                  LI Chenlu


                 2013.01.22




Content


   1   Introduction
   2   Part 1
          Statistical Evidence
          The Principle of Sufficiency
          The Principle of Conditionality
          The Likelihood Principle
   3   Part 2
          Binary Experiments
          Finite Parameter Spaces
          More General Parameter Spaces
          Bayesian Methods: An Interpretation of the Principle of
          Insufficient Reason
   4   Conclusion
Introduction



  The paper studies the likelihood principle (LP) and how the
  likelihood function can be used to measure the evidence in the data
  about an unknown parameter.
  • The main aim of the paper is to show and discuss the implications
  of the fact that the LP is a consequence of the concepts of
  conditional frames of reference and sufficiency.
  • The second aim of the paper is to describe how and why these
  principles are appropriate ways to characterize statistical evidence
  in parametric models for inference purposes.
Part 1




   1    Statistical Evidence
   2    The Principle of Sufficiency
   3    The Principle of Conditionality
   4    The Likelihood Principle
Statistical Evidence



  An experiment E is defined as E = {Ω, S, f(x, θ)}, where f is a
  density, θ is the unknown parameter, Ω is the parameter space, and S
  is the sample space of outcomes x of E. The likelihood function
  determined by an observed outcome x is Lx(θ) = f(x, θ).

  Birnbaum states that the central purpose of the paper is to clarify
  the essential structure and properties of statistical evidence, termed
  the evidential meaning of (E, x) and denoted by Ev(E, x), in various
  instances.
           Ev(E, x) is the evidence about θ supplied by x and E.
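
  As a concrete illustration, the sketch below builds such an experiment and
  the likelihood function it determines; the binomial model and the discretized
  Ω are assumptions made only for the example, not part of Birnbaum's paper.

    # A minimal sketch, assuming a binomial model, of an experiment
    # E = {Ω, S, f(x, θ)} and the likelihood function Lx(θ) = f(x, θ)
    # determined by an observed outcome x.
    from scipy.stats import binom

    n = 10                                     # sample space S = {0, 1, ..., n}
    Omega = [k / 10 for k in range(1, 10)]     # a discretized parameter space Ω

    def likelihood(x):
        """Likelihood function Lx(θ) determined by the observed outcome x."""
        return {theta: binom.pmf(x, n, theta) for theta in Omega}

    print(likelihood(7))    # Ev(E, x) is carried by this function of θ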




The Principle of Sufficiency



  The Principle of Sufficiency (S)
  Let E be any experiment, with sample space {x}, and let t(x) be any
  sufficient statistic. Let E′ denote the derived experiment, having
  the same parameter space, such that when any outcome x of E is
  observed, the corresponding outcome t = t(x) of E′ is observed. Then
  for each x, Ev(E, x) = Ev(E′, t), where t = t(x).

  ∗ If t(x) is a sufficient statistic for θ, then any inference about θ
  should depend on the sample x only through the value t(x).
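
  A minimal sketch of this remark, assuming a Bernoulli model: two samples
  with the same sufficient statistic t(x) = Σ x determine identical likelihood
  functions, so by (S) they carry the same evidential meaning.

    def bernoulli_likelihood(x, theta):
        t = sum(x)                                 # sufficient statistic t(x)
        return theta ** t * (1 - theta) ** (len(x) - t)

    x1, x2 = [1, 1, 0, 0, 1], [0, 1, 1, 1, 0]      # same t(x) = 3
    for theta in (0.2, 0.5, 0.8):
        assert bernoulli_likelihood(x1, theta) == bernoulli_likelihood(x2, theta)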




The Principle of Sufficiency




     If x is any specified outcome of any specified experiment E, the
     likelihood function determined by x is the function of θ: cf(x, θ), where
     c is any positive constant.
     If for some positive constant c we have f(x, θ) = cg(y, θ) for
     all θ, then x and y are said to determine the same likelihood function.
     If two outcomes x, x′ of one experiment determine the same
     likelihood function, f(x, θ) = cf(x′, θ) for all θ, then there exists a
     sufficient statistic t such that t(x) = t(x′).




The Principle of Sufficiency




  Lemma 1
  If two outcomes x, x′ of any experiment E determine the same
  likelihood function, then they have the same evidential meaning:
  Ev(E, x) = Ev(E, x′).




The Principle of Conditionality


  The definition of a mixture experiment
  An experiment E is called a mixture, with components {Eh}, if it is
  mathematically equivalent to a two-stage experiment of the following
  form:

    1   An observation h is taken on a random variable H having a
        fixed and known distribution G (G does not depend on unknown
        parameter values).
    2   The corresponding component experiment Eh is carried out,
        yielding an outcome xh.
  Thus each outcome of E is a pair (Eh, xh).



The Principle of Conditionality



  The Principle of Conditionality (C)
  If an experiment E is a mixture G of components {Eh}, with possible
  outcomes (Eh, xh), then

                    Ev(E, (Eh, xh)) = Ev(Eh, xh)

    That is, the evidential meaning of any outcome (Eh, xh) of any ex-
  periment E having a mixture structure is the same as the evidential
  meaning of the corresponding outcome xh of the corresponding
  component experiment Eh, ignoring otherwise the overall structure
  of the original experiment E.




The Principle of Conditionality


  Example
  Suppose that two instruments (h = 1 or 2) are available for use in
  an experiment, with respective probabilities p1 = 0.73, p2 = 0.27 of
  being selected for use. Each instrument yields an observation y = 1
  or y = 0.

  Consider the assertion Ev(E, (E1, 1)) = Ev(E1, 1): by accepting the
  experimental conditions, suppose that E leads to selection of the
  first instrument (h = 1).

  In this hypothetical situation, one would be prepared to report
  either (E1, 0) or (E1, 1) as a complete description of the statistical
  evidence obtained.



The Principle of Conditionality



  Example

  For purposes of informative inference, if y = 1 is observed with the
  first instrument, then the report (E1, 1) seems to be an appropriate
  and complete description of the statistical evidence obtained,
  and the "more complete" report (E, (E1, 1)) seems to differ from it
  only by the addition of recognizably redundant elements irrelevant
  to the evidential meaning and evidential interpretation of this
  outcome of E.
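
  A simulation sketch of the two-instrument example: the selection probabilities
  p1 = 0.73, p2 = 0.27 are from the slide, while the per-instrument error rates
  are hypothetical, added only to make the example runnable.

    import random

    ERROR = {1: 0.05, 2: 0.30}     # hypothetical error rate of each instrument

    def run_mixture(theta):
        """One run of E: stage 1 picks the instrument, stage 2 observes y."""
        h = 1 if random.random() < 0.73 else 2       # known mixture distribution G
        y = theta if random.random() > ERROR[h] else 1 - theta
        return h, y                                  # the outcome of E is (E_h, x_h)

    # (C): once h = 1 is known, the evidence in y is that of experiment E_1 alone.
    print(run_mixture(theta=1))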




The Likelihood Principle



  The Likelihood Principle (L)
  If E and E′ are any two experiments with a common parameter
  space, and if x and y are any respective outcomes which determine
  likelihood functions satisfying f(x, θ) = cg(y, θ) for some positive
  constant c = c(x, y) and all θ, then Ev(E, x) = Ev(E′, y).

     That is, the evidential meaning Ev(E, x) of any outcome x
  of any experiment E is characterized completely by the likelihood
  function cf(x, θ), and is otherwise independent of the structure of
  (E, x).




The Likelihood Principle



  Lemma 2
  (S) and (C) ⇐⇒ (L)

  Proof (⇐):
  • That (L) implies (C) follows immediately from the fact that in all
  cases the likelihood functions determined respectively by (E, (Eh, xh)) and
  (Eh, xh) are proportional.
  • That (L) implies (S) follows immediately from Lemma 1.




The Likelihood Principle
  Lemma 2
  (S) and (C) ⇐⇒ (L)

  Proof (⇒):
  Let E and E′ denote any two experiments having the same parameter space Ω = {θ}, and
  represented by probability density functions f(x, θ), g(y, θ) on their respective sample
  spaces S = {x}, S′ = {y}. Consider the mixture experiment E* whose components are just
  E and E′, taken with equal probabilities. Let z denote the sample point of E*, and let C
  denote any set of points z; then C = A ∪ B, where A ⊂ S and B ⊂ S′, and

       Prob(Z ∈ C | θ) = (1/2) Prob(A | θ, E) + (1/2) Prob(B | θ, E′).

  Let the probability density function representing E* be denoted by

       h(z, θ) = (1/2) f(x, θ)   if z = x ∈ S,
       h(z, θ) = (1/2) g(y, θ)   if z = y ∈ S′.

  From (C), it follows that: Ev(E*, (E, x)) = Ev(E, x),   for each x ∈ S,
                             Ev(E*, (E′, y)) = Ev(E′, y), for each y ∈ S′.        (a)
The Likelihood Principle


  Proof (⇒), continued:
  Let x, y be any two outcomes of E, E′ respectively which determine the same likelihood
  function: f(x, θ) = cg(y, θ) for all θ,
  where c is some positive constant. Then we have h(x, θ) = ch(y, θ) for all θ, so
  the two outcomes (E, x), (E′, y) of E* determine the same likelihood function. It then
  follows from (S) and Lemma 1 that Ev(E*, (E, x)) = Ev(E*, (E′, y)).             (b)
  From (a) and (b) it follows that:

                                  Ev(E, x) = Ev(E′, y).

    This consequence states that any two outcomes x, y of any two experiments E, E′ (with
  the same parameter space) have the same evidential meaning if they determine the same
  likelihood function.
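
  A numerical check of the key step, using assumed normal densities: in the
  equal-weight mixture E*, the outcome (E, x) determines the likelihood
  h(x, θ) = (1/2) f(x, θ), which is proportional (in θ) to f(x, θ) itself.

    from scipy.stats import norm

    f = lambda x, theta: norm.pdf(x, loc=theta)    # density of component E
    x = 1.3
    for theta in (-1.0, 0.0, 2.0):
        h = 0.5 * f(x, theta)                      # density of the mixture E* at z = x
        assert h / f(x, theta) == 0.5              # constant ratio in θ: same likelihood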




The Likelihood Principle

  Impact of the principle
  • The implication ⇒ is the most important part of the equiva-
  lence, because it means that if you do not accept (L), you have to
  discard either (S) or (C), two widely accepted principles.

  • The most important consequence of (L) seems to be that
  evidential measures based on a specific experimental frame of refer-
  ence (like p-values and confidence levels) are somewhat unsatisfactory.

  • In other words, (L) eliminates the need to consider the
  sample space or any part of it once the data are observed. Lemma
  2 truly was a "breakthrough" in the foundations of statistical
  inference and made (L) stand on its own ground, independent of a
  Bayesian argument.


Part 2




   1    Binary Experiments
   2    Finite Parameter Spaces
   3    More General Parameter Spaces
   4    Bayesian Methods
Binary Experiments



  Let Ω = (θ1, θ2). In this case, (L) means that all the information lies
  in the likelihood ratio λ(x) = f(x, θ2)/f(x, θ1).
  The question is now: what evidential meaning can we attach to the
  number λ(x)?
    To answer this, Birnbaum first considers a binary experiment in
  which the sample space has only two points, denoted (+) and (−), and
  such that p(+|θ1) = p(−|θ2) = α for an α ≤ 1/2. Such an experiment
  is called a symmetric simple binary experiment and is characterized
  by the "error" probability α.




Binary Experiments



   For such an experiment, λ(+) = (1 − α)/α ≥ 1, α = 1/(1 + λ(+)),
  and λ(−) = α/(1 − α) ≤ 1. The important point now is that according
  to (L), two experiments with the same value of λ have the same
  evidential meaning about the value of α.
  Therefore, the evidential meaning of λ(x) ≥ 1 from any binary
  experiment E is the same as the evidential meaning of the
  (+) outcome from a symmetric simple binary experiment with
  α(x) = 1/(1 + λ(x)). α(x) is called the intrinsic significance level and
  is a measure of evidence that satisfies (L).
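
  A minimal sketch of the intrinsic significance level α(x) = 1/(1 + λ(x)); the
  normal model for the binary experiment Ω = {θ1, θ2} is an assumption used only
  for illustration, and the ratio is oriented so that λ(x) ≥ 1 as above.

    from scipy.stats import norm

    def intrinsic_alpha(x, theta1=0.0, theta2=1.0):
        lam = norm.pdf(x, loc=theta2) / norm.pdf(x, loc=theta1)   # λ(x)
        lam = max(lam, 1.0 / lam)          # orient the ratio so that λ(x) ≥ 1
        return 1.0 / (1.0 + lam)           # error probability α of the matching
                                           # symmetric simple binary experiment

    print(intrinsic_alpha(1.8))            # small α(x): strong evidence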




Finite Parameter Spaces

  If E is any experiment with a parameter space containing only a
  finite number k of points, θ = i = 1, 2, ..., k, any observed outcome x of
  E determines a likelihood function L(i) = cf(x, i), i = 1, ..., k. We can
  assume that Σ_{i=1}^k L(i) = 1.
  Any experiment E with a finite sample space j = 1, ..., m, and finite
  parameter space is represented by a stochastic matrix

                     ⎡ p11  ...  p1m ⎤
       E = (pij) =   ⎢  ...   ...    ...  ⎥
                     ⎣ pk1  ...  pkm ⎦

   where Σ_{j=1}^m pij = 1 and pij = Prob[j|i], for each i, j. Here the i-th row is the
  discrete probability distribution given by parameter value i, and the j-th column is
  proportional to the likelihood function L(i) = L(i|j) = c pij, i = 1, ..., k, determined by
  outcome j.
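
  A minimal sketch of this stochastic-matrix view; the entries of P below are
  illustrative. Row i is the distribution pij under parameter i, and column j,
  renormalized, is the likelihood function L(i) determined by outcome j.

    import numpy as np

    P = np.array([[0.7, 0.2, 0.1],
                  [0.3, 0.3, 0.4]])          # k = 2 parameter points, m = 3 outcomes
    assert np.allclose(P.sum(axis=1), 1.0)   # each row sums to 1

    j = 0                                    # observed outcome (0-based index)
    L = P[:, j] / P[:, j].sum()              # L(i) with Σ_i L(i) = 1
    print(L)                                 # [0.7, 0.3]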



Finite Parameter Spaces: Qualitative evidential interpretation

   Example 1:
   Consider an experiment with a sample space of only two points j = 1, 2.
   We can define Prob[j=1|i] = L(i) and Prob[j=2|i] = 1 − L(i), for i = 1, ..., k.
   For example, the likelihood function L(i) = 1/3, i = 1, 2, 3, represents the
   possible outcome j = 1 of the experiment

                 ⎡ 1/3  2/3 ⎤
            E =  ⎢ 1/3  2/3 ⎥
                 ⎣ 1/3  2/3 ⎦

   • Since this experiment gives the same distribution on the two-point sample space under
   each hypothesis, it is completely uninformative. According to the likelihood principle, we
   can therefore conclude that the given likelihood function has a simple evidential inter-
   pretation, regardless of the structure of the experiment: it represents a completely
   uninformative outcome.
Finite Parameter Spaces: Qualitative evidential interpretation

   Example 2:
   The likelihood function (1/2, 1/2, 0) (that is, L(1) = L(2) = 1/2, L(3) = 0, on the
   3-point parameter space i = 1, 2, 3)
   represents the possible outcome j = 1 of the experiment

                 ⎡ 1/2  1/2 ⎤
            E =  ⎢ 1/2  1/2 ⎥
                 ⎣  0    1  ⎦

   • This outcome of E is impossible under i = 3, and hence supports without risk of error
   the conclusion that i ≠ 3.
   • E prescribes identical distributions under i = 1 and 2, and hence the experiment E, and
   each of its possible outcomes, is completely uninformative as between i = 1 and 2.
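
   A small check of Examples 1 and 2 (matrices as reconstructed above): an
   experiment is completely uninformative between two parameter values exactly
   when the corresponding rows of its matrix coincide.

     import numpy as np

     E1 = np.array([[1/3, 2/3]] * 3)                      # Example 1: identical rows
     E2 = np.array([[0.5, 0.5], [0.5, 0.5], [0.0, 1.0]])  # Example 2

     print(np.allclose(E1, E1[0]))      # True: E1 is completely uninformative
     print(np.allclose(E2[0], E2[1]))   # True: uninformative between i = 1 and 2
     print(E2[2, 0] == 0.0)             # True: j = 1 is impossible under i = 3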


Finite Parameter Spaces: Qualitative evidential interpretation

   Example 3: some likelihood functions on a given parameter space can
   be compared and ordered in a natural way.
   Consider the likelihood functions (0.8, 0.1, 0.1) and (0.45, 0.275, 0.275).
   The interpretation that the first is more informative than the second is supported
   as follows:

                 ⎡ 0.8  0.2 ⎤
            E =  ⎢ 0.1  0.9 ⎥  = (pij)
                 ⎣ 0.1  0.9 ⎦

   When outcome j = 2 of E is observed, we report w = 1 with probability 1/2 and w = 2
   with probability 1/2. When outcome j = 1 of E is observed, the report w = 1 is given.

                  ⎡ 0.9   0.1  ⎤
            E′ =  ⎢ 0.55  0.45 ⎥  = (piw)
                  ⎣ 0.55  0.45 ⎦

   The experiment E′ is less informative than E.
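
   A numerical check of Example 3: post-randomizing the outcomes of E with the
   Markov kernel M below reproduces E′ exactly, so E′ cannot carry more
   information than E.

     import numpy as np

     E = np.array([[0.8, 0.2],
                   [0.1, 0.9],
                   [0.1, 0.9]])
     M = np.array([[1.0, 0.0],      # j = 1 -> report w = 1
                   [0.5, 0.5]])     # j = 2 -> report w = 1 or w = 2, each with prob. 1/2
     print(E @ M)                   # [[0.9 0.1] [0.55 0.45] [0.55 0.45]] = E′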

Finite Parameter Spaces: Intrinsic confidence methods

   Example 4
   Consider the likelihood function (0.9, 0.09, 0.01) defined on the param-
   eter space i = 1, 2, 3. This represents the possible outcome j = 1 of the
   experiment

                 ⎡ 0.9   0.01  0.09 ⎤
            E =  ⎢ 0.09  0.9   0.01 ⎥  = (pij)
                 ⎣ 0.01  0.09  0.9  ⎦

   • In this experiment, a confidence set estimator of the parameter i is
   given by taking, for each possible outcome j, the two values of i
   having the greatest likelihoods L(i | j).
   • We can verify that under each value of i, the probability is 0.99
   that the confidence set determined in this way covers i, so these
   sets have confidence coefficient 0.99.
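
   A verification of Example 4: for each outcome j, take the two values of i
   with the largest L(i | j); under every i, this set covers i with probability
   0.99.

     import numpy as np

     E = np.array([[0.90, 0.01, 0.09],
                   [0.09, 0.90, 0.01],
                   [0.01, 0.09, 0.90]])
     sets = [np.argsort(E[:, j])[-2:] for j in range(3)]   # top-2 likelihoods per column
     for i in range(3):
         coverage = sum(E[i, j] for j in range(3) if i in sets[j])
         print(i + 1, round(coverage, 2))                  # 0.99 for each i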
Finite Parameter Spaces: Intrinsic confidence methods

   The general form of the intrinsic confidence methods
   For any likelihood function L(i) defined on a finite parameter space
   i = 1, ..., k, and such that Σ_{i=1}^k L(i) = 1:
   If there is a unique least likely value i1 of i, let c1 = 1 − L(i1). Then the remaining
   (k − 1) parameter points will be called an intrinsic confidence set with intrinsic
   confidence coefficient c1. If there is a pair of values of i, say i1, i2, with likelihoods
   strictly smaller than those of the remaining (k − 2) points, call the latter set of
   points an intrinsic confidence set, with intrinsic confidence level
   c2 = 1 − L(i1) − L(i2), and so on. These are regular confidence methods in the
   constructed experiment

                     ⎡ L(1)   L(k)    L(k−1)  ... ⎤
                     ⎢ L(2)   L(1)    L(k)    ... ⎥
                E =  ⎢ L(3)   L(2)    L(1)    ... ⎥  = (pij)
                     ⎢  ...     ...      ...       ⎥
                     ⎣ L(k)   L(k−1)  L(k−2)  ... ⎦
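
   A minimal sketch of the general construction: order the parameter points by
   likelihood and peel off the least likely ones, reporting nested intrinsic
   confidence sets with coefficients 1 minus the peeled likelihood mass. (The
   function name and 1-based labels are choices made for this sketch.)

     def intrinsic_confidence_sets(L):
         """L: normalized likelihood values L(1), ..., L(k), summing to 1."""
         order = sorted(range(len(L)), key=lambda i: L[i])   # least likely first
         sets, dropped = [], 0.0
         for r in range(1, len(L)):
             dropped += L[order[r - 1]]
             sets.append((sorted(i + 1 for i in order[r:]), 1.0 - dropped))
         return sets                                         # [(set of i, coefficient), ...]

     for s, c in intrinsic_confidence_sets([0.9, 0.09, 0.01]):
         print(s, round(c, 4))      # {1, 2} at 0.99, then {1} at 0.9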



Finite Parameter Spaces



  On finite parameter spaces:
  • For finite parameter spaces, significance levels, confidence sets, and
  confidence levels can be based on the observed Lx(θ), hence
  satisfying (L): they are defined as the regular such methods and concepts
  for a constructed experiment with a likelihood function identical to Lx(θ).

  • Therefore, in the case of finite parameter spaces, a clear and
  logical evidential interpretation of the likelihood function can be
  given through intrinsic methods and concepts.
More General Parameter Spaces



  • This section deals mainly with the case where Ω is the real line.
  Given E, x, and Lx(θ), a hypothetical experiment E′ consisting of a
  single observation of Y with density g(y, θ) = cLx(θ − y) is then
  constructed.
  • Then (E, x) has the same likelihood function as (E′, 0), and (L)
  implies that the same inference should be used in (E, x) as in (E′, 0).
  For example, if a regular (1 − α) confidence interval in E′ is used, then
  this interval estimate (for y = 0) should be the one used also for (E, x)
  and is called a (1 − α) intrinsic confidence interval for (E, x).
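
  A sketch under an assumed normal-shaped likelihood Lx(θ) ∝ exp(−(θ − m)²/(2s²)):
  in the constructed experiment E′, Y has density g(y, θ) = c·Lx(θ − y), and the
  regular (1 − α) interval evaluated at y = 0 reduces to the interval read off
  Lx(θ) itself. The function name and parameters are choices for this sketch.

    from scipy.stats import norm

    def intrinsic_interval(m, s, level=0.95):
        z = norm.ppf(0.5 + level / 2.0)    # e.g. 1.96 for level = 0.95
        return (m - z * s, m + z * s)      # (1 - α) intrinsic confidence interval

    print(intrinsic_interval(m=2.0, s=0.5))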




More General Parameter Spaces




  As a general comment, Birnbaum emphasizes that intrinsic methods
  and concepts can, in light of (L), be nothing more than methods of
  expressing evidential meaning already implicit in Lx(θ) itself.
  In the discussion, Birnbaum does not recommend intrinsic methods
  as statistical methods in practice. The value of these methods is
  conceptual, and the main use of intrinsic concepts is to show that
  likelihood functions as such are evidentially meaningful.




Bayesian Methods: An Interpretation of the Principle of
Insufficient Reason




  Birnbaum views the Bayes approach as not directed to informative
  inference, but rather as a way to determine an appropriate final
  synthesis of available information, based on prior information and
  data. It is observed that in determining the posterior
  distribution, the contribution of the data and E is Lx(θ) only, so the
  Bayes approach implies (L).
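
  A numerical illustration (discrete prior and likelihood values assumed) that
  the posterior depends on the data only through Lx(θ): rescaling Lx by any
  c > 0 leaves the posterior unchanged, so the Bayes approach satisfies (L).

    import numpy as np

    prior = np.array([0.2, 0.5, 0.3])
    L_x = np.array([0.7, 0.1, 0.4])      # likelihood at the observed x

    def posterior(prior, lik):
        w = prior * lik                  # posterior ∝ prior × likelihood
        return w / w.sum()

    print(np.allclose(posterior(prior, L_x), posterior(prior, 5.0 * L_x)))  # True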




Conclusion


  • Birnbaum's main result, that the LP follows from sufficiency and
  conditionality principles that most statisticians accept, must be
  regarded as one of the deepest theorems of theoretical statistics,
  yet the proof is remarkably simple.
  • The result had a decisive influence on how many statisticians
  came to view the likelihood function as a basic quantity in
  statistical analysis.
  • It has also affected in a general way how we view the science of
  statistics. Birnbaum introduced principles of equivalence within and
  between experiments, showing various relationships between these
  principles. This made it possible to discuss the different concepts
  from alternative viewpoints.



References



     Allan Birnbaum, "On the foundations of statistical inference: binary
     experiments", Institute of Mathematical Sciences, New York University
     Daniel Steel, "Bayesian confirmation theory and the likelihood
     principle", Michigan State University
     Royall, R., "Statistical Evidence: A likelihood paradigm", Chapman
     and Hall, London
     Jan F. Bjørnstad, "Breakthroughs in Statistics Volume I: Foundations
     and Basic Theory", The University of Trondheim




The end! Thank you!

Contenu connexe

Tendances

Function an old french mathematician said[1]12
Function an old french mathematician said[1]12Function an old french mathematician said[1]12
Function an old french mathematician said[1]12Mark Hilbert
 
Common random fixed point theorems of contractions in
Common random fixed point theorems of contractions inCommon random fixed point theorems of contractions in
Common random fixed point theorems of contractions inAlexander Decker
 
A semi analytic method for solving two-dimensional fractional dispersion equa...
A semi analytic method for solving two-dimensional fractional dispersion equa...A semi analytic method for solving two-dimensional fractional dispersion equa...
A semi analytic method for solving two-dimensional fractional dispersion equa...Alexander Decker
 
Introduction to comp.physics ch 3.pdf
Introduction to comp.physics ch 3.pdfIntroduction to comp.physics ch 3.pdf
Introduction to comp.physics ch 3.pdfJifarRaya
 
Intro. to computational Physics ch2.pdf
Intro. to computational Physics ch2.pdfIntro. to computational Physics ch2.pdf
Intro. to computational Physics ch2.pdfJifarRaya
 
Partitioning procedures for solving mixed-variables programming problems
Partitioning procedures for solving mixed-variables programming problemsPartitioning procedures for solving mixed-variables programming problems
Partitioning procedures for solving mixed-variables programming problemsSSA KPI
 
Week8 livelecture2010 follow_up
Week8 livelecture2010 follow_upWeek8 livelecture2010 follow_up
Week8 livelecture2010 follow_upBrent Heard
 
Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)IJERD Editor
 
Statistics (1): estimation Chapter 3: likelihood function and likelihood esti...
Statistics (1): estimation Chapter 3: likelihood function and likelihood esti...Statistics (1): estimation Chapter 3: likelihood function and likelihood esti...
Statistics (1): estimation Chapter 3: likelihood function and likelihood esti...Christian Robert
 
Common Fixed Theorems Using Random Implicit Iterative Schemes
Common Fixed Theorems Using Random Implicit Iterative SchemesCommon Fixed Theorems Using Random Implicit Iterative Schemes
Common Fixed Theorems Using Random Implicit Iterative Schemesinventy
 

Tendances (19)

Function an old french mathematician said[1]12
Function an old french mathematician said[1]12Function an old french mathematician said[1]12
Function an old french mathematician said[1]12
 
Congress
Congress Congress
Congress
 
L25052056
L25052056L25052056
L25052056
 
Common random fixed point theorems of contractions in
Common random fixed point theorems of contractions inCommon random fixed point theorems of contractions in
Common random fixed point theorems of contractions in
 
A semi analytic method for solving two-dimensional fractional dispersion equa...
A semi analytic method for solving two-dimensional fractional dispersion equa...A semi analytic method for solving two-dimensional fractional dispersion equa...
A semi analytic method for solving two-dimensional fractional dispersion equa...
 
Introduction to comp.physics ch 3.pdf
Introduction to comp.physics ch 3.pdfIntroduction to comp.physics ch 3.pdf
Introduction to comp.physics ch 3.pdf
 
Blackbox task 2
Blackbox task 2Blackbox task 2
Blackbox task 2
 
Math task 3
Math task 3Math task 3
Math task 3
 
Intro. to computational Physics ch2.pdf
Intro. to computational Physics ch2.pdfIntro. to computational Physics ch2.pdf
Intro. to computational Physics ch2.pdf
 
Task 4
Task 4Task 4
Task 4
 
Partitioning procedures for solving mixed-variables programming problems
Partitioning procedures for solving mixed-variables programming problemsPartitioning procedures for solving mixed-variables programming problems
Partitioning procedures for solving mixed-variables programming problems
 
Presentation aust final
Presentation aust finalPresentation aust final
Presentation aust final
 
B02609013
B02609013B02609013
B02609013
 
Numerical method (curve fitting)
Numerical method (curve fitting)Numerical method (curve fitting)
Numerical method (curve fitting)
 
Week8 livelecture2010 follow_up
Week8 livelecture2010 follow_upWeek8 livelecture2010 follow_up
Week8 livelecture2010 follow_up
 
Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)
 
Chapter 2 roots of equations
Chapter 2 roots of equationsChapter 2 roots of equations
Chapter 2 roots of equations
 
Statistics (1): estimation Chapter 3: likelihood function and likelihood esti...
Statistics (1): estimation Chapter 3: likelihood function and likelihood esti...Statistics (1): estimation Chapter 3: likelihood function and likelihood esti...
Statistics (1): estimation Chapter 3: likelihood function and likelihood esti...
 
Common Fixed Theorems Using Random Implicit Iterative Schemes
Common Fixed Theorems Using Random Implicit Iterative SchemesCommon Fixed Theorems Using Random Implicit Iterative Schemes
Common Fixed Theorems Using Random Implicit Iterative Schemes
 

En vedette

Predicting Delinquency-Give me some credit
Predicting Delinquency-Give me some creditPredicting Delinquency-Give me some credit
Predicting Delinquency-Give me some creditpragativbora
 
Kaggle "Give me some credit" challenge overview
Kaggle "Give me some credit" challenge overviewKaggle "Give me some credit" challenge overview
Kaggle "Give me some credit" challenge overviewAdam Pah
 
Species sampling models in Bayesian Nonparametrics
Species sampling models in Bayesian NonparametricsSpecies sampling models in Bayesian Nonparametrics
Species sampling models in Bayesian NonparametricsJulyan Arbel
 
Bayesian Nonparametrics, Applications to biology, ecology, and marketing
Bayesian Nonparametrics, Applications to biology, ecology, and marketingBayesian Nonparametrics, Applications to biology, ecology, and marketing
Bayesian Nonparametrics, Applications to biology, ecology, and marketingJulyan Arbel
 
Asymptotics for discrete random measures
Asymptotics for discrete random measuresAsymptotics for discrete random measures
Asymptotics for discrete random measuresJulyan Arbel
 
Presentation of Bassoum Abou on Stein's 1981 AoS paper
Presentation of Bassoum Abou on Stein's 1981 AoS paperPresentation of Bassoum Abou on Stein's 1981 AoS paper
Presentation of Bassoum Abou on Stein's 1981 AoS paperChristian Robert
 
Dependent processes in Bayesian Nonparametrics
Dependent processes in Bayesian NonparametricsDependent processes in Bayesian Nonparametrics
Dependent processes in Bayesian NonparametricsJulyan Arbel
 
A Gentle Introduction to Bayesian Nonparametrics
A Gentle Introduction to Bayesian NonparametricsA Gentle Introduction to Bayesian Nonparametrics
A Gentle Introduction to Bayesian NonparametricsJulyan Arbel
 
A Gentle Introduction to Bayesian Nonparametrics
A Gentle Introduction to Bayesian NonparametricsA Gentle Introduction to Bayesian Nonparametrics
A Gentle Introduction to Bayesian NonparametricsJulyan Arbel
 
Gelfand and Smith (1990), read by
Gelfand and Smith (1990), read byGelfand and Smith (1990), read by
Gelfand and Smith (1990), read byChristian Robert
 
Severe Testing: The Key to Error Correction
Severe Testing: The Key to Error CorrectionSevere Testing: The Key to Error Correction
Severe Testing: The Key to Error Correctionjemille6
 
Testing point null hypothesis, a discussion by Amira Mziou
Testing point null hypothesis, a discussion by Amira MziouTesting point null hypothesis, a discussion by Amira Mziou
Testing point null hypothesis, a discussion by Amira MziouChristian Robert
 
Reading Efron's 1979 paper on bootstrap
Reading Efron's 1979 paper on bootstrapReading Efron's 1979 paper on bootstrap
Reading Efron's 1979 paper on bootstrapChristian Robert
 
Reading the Lasso 1996 paper by Robert Tibshirani
Reading the Lasso 1996 paper by Robert TibshiraniReading the Lasso 1996 paper by Robert Tibshirani
Reading the Lasso 1996 paper by Robert TibshiraniChristian Robert
 

En vedette (20)

Predicting Delinquency-Give me some credit
Predicting Delinquency-Give me some creditPredicting Delinquency-Give me some credit
Predicting Delinquency-Give me some credit
 
ISBA 2016: Foundations
ISBA 2016: FoundationsISBA 2016: Foundations
ISBA 2016: Foundations
 
ABC workshop: 17w5025
ABC workshop: 17w5025ABC workshop: 17w5025
ABC workshop: 17w5025
 
Kaggle "Give me some credit" challenge overview
Kaggle "Give me some credit" challenge overviewKaggle "Give me some credit" challenge overview
Kaggle "Give me some credit" challenge overview
 
Species sampling models in Bayesian Nonparametrics
Species sampling models in Bayesian NonparametricsSpecies sampling models in Bayesian Nonparametrics
Species sampling models in Bayesian Nonparametrics
 
Bayesian Nonparametrics, Applications to biology, ecology, and marketing
Bayesian Nonparametrics, Applications to biology, ecology, and marketingBayesian Nonparametrics, Applications to biology, ecology, and marketing
Bayesian Nonparametrics, Applications to biology, ecology, and marketing
 
Asymptotics for discrete random measures
Asymptotics for discrete random measuresAsymptotics for discrete random measures
Asymptotics for discrete random measures
 
Presentation of Bassoum Abou on Stein's 1981 AoS paper
Presentation of Bassoum Abou on Stein's 1981 AoS paperPresentation of Bassoum Abou on Stein's 1981 AoS paper
Presentation of Bassoum Abou on Stein's 1981 AoS paper
 
Nested sampling
Nested samplingNested sampling
Nested sampling
 
Dependent processes in Bayesian Nonparametrics
Dependent processes in Bayesian NonparametricsDependent processes in Bayesian Nonparametrics
Dependent processes in Bayesian Nonparametrics
 
A Gentle Introduction to Bayesian Nonparametrics
A Gentle Introduction to Bayesian NonparametricsA Gentle Introduction to Bayesian Nonparametrics
A Gentle Introduction to Bayesian Nonparametrics
 
Bayesian Classics
Bayesian ClassicsBayesian Classics
Bayesian Classics
 
A Gentle Introduction to Bayesian Nonparametrics
A Gentle Introduction to Bayesian NonparametricsA Gentle Introduction to Bayesian Nonparametrics
A Gentle Introduction to Bayesian Nonparametrics
 
Gelfand and Smith (1990), read by
Gelfand and Smith (1990), read byGelfand and Smith (1990), read by
Gelfand and Smith (1990), read by
 
Severe Testing: The Key to Error Correction
Severe Testing: The Key to Error CorrectionSevere Testing: The Key to Error Correction
Severe Testing: The Key to Error Correction
 
Reading Neyman's 1933
Reading Neyman's 1933 Reading Neyman's 1933
Reading Neyman's 1933
 
Testing point null hypothesis, a discussion by Amira Mziou
Testing point null hypothesis, a discussion by Amira MziouTesting point null hypothesis, a discussion by Amira Mziou
Testing point null hypothesis, a discussion by Amira Mziou
 
Reading Efron's 1979 paper on bootstrap
Reading Efron's 1979 paper on bootstrapReading Efron's 1979 paper on bootstrap
Reading Efron's 1979 paper on bootstrap
 
Reading the Lasso 1996 paper by Robert Tibshirani
Reading the Lasso 1996 paper by Robert TibshiraniReading the Lasso 1996 paper by Robert Tibshirani
Reading the Lasso 1996 paper by Robert Tibshirani
 
slides Céline Beji
slides Céline Bejislides Céline Beji
slides Céline Beji
 

Similaire à Reading Birnbaum's (1962) paper, by Li Chenlu

Finite mixture model with EM algorithm
Finite mixture model with EM algorithmFinite mixture model with EM algorithm
Finite mixture model with EM algorithmLoc Nguyen
 
Mayo aug1, jsm slides (3)
Mayo aug1, jsm slides (3)Mayo aug1, jsm slides (3)
Mayo aug1, jsm slides (3)jemille6
 
Data mining assignment 2
Data mining assignment 2Data mining assignment 2
Data mining assignment 2BarryK88
 
"reflections on the probability space induced by moment conditions with impli...
"reflections on the probability space induced by moment conditions with impli..."reflections on the probability space induced by moment conditions with impli...
"reflections on the probability space induced by moment conditions with impli...Christian Robert
 
Learning dyadic data and predicting unaccomplished co-occurrent values by mix...
Learning dyadic data and predicting unaccomplished co-occurrent values by mix...Learning dyadic data and predicting unaccomplished co-occurrent values by mix...
Learning dyadic data and predicting unaccomplished co-occurrent values by mix...Loc Nguyen
 
Machine learning (9)
Machine learning (9)Machine learning (9)
Machine learning (9)NYversity
 
Expectation of Discrete Random Variable.ppt
Expectation of Discrete Random Variable.pptExpectation of Discrete Random Variable.ppt
Expectation of Discrete Random Variable.pptAlyasarJabbarli
 
Handling missing data with expectation maximization algorithm
Handling missing data with expectation maximization algorithmHandling missing data with expectation maximization algorithm
Handling missing data with expectation maximization algorithmLoc Nguyen
 
Mayo Slides: Part I Meeting #2 (Phil 6334/Econ 6614)
Mayo Slides: Part I Meeting #2 (Phil 6334/Econ 6614)Mayo Slides: Part I Meeting #2 (Phil 6334/Econ 6614)
Mayo Slides: Part I Meeting #2 (Phil 6334/Econ 6614)jemille6
 
Testing for mixtures by seeking components
Testing for mixtures by seeking componentsTesting for mixtures by seeking components
Testing for mixtures by seeking componentsChristian Robert
 
Probability/Statistics Lecture Notes 4: Hypothesis Testing
Probability/Statistics Lecture Notes 4: Hypothesis TestingProbability/Statistics Lecture Notes 4: Hypothesis Testing
Probability/Statistics Lecture Notes 4: Hypothesis Testingjemille6
 
1616 probability-the foundation of probability theory
1616 probability-the foundation of probability theory1616 probability-the foundation of probability theory
1616 probability-the foundation of probability theoryDr Fereidoun Dejahang
 
Cs229 notes8
Cs229 notes8Cs229 notes8
Cs229 notes8VuTran231
 

Similaire à Reading Birnbaum's (1962) paper, by Li Chenlu (20)

Finite mixture model with EM algorithm
Finite mixture model with EM algorithmFinite mixture model with EM algorithm
Finite mixture model with EM algorithm
 
Mayo aug1, jsm slides (3)
Mayo aug1, jsm slides (3)Mayo aug1, jsm slides (3)
Mayo aug1, jsm slides (3)
 
Data mining assignment 2
Data mining assignment 2Data mining assignment 2
Data mining assignment 2
 
Eliminiation by aspescts
Eliminiation by aspesctsEliminiation by aspescts
Eliminiation by aspescts
 
"reflections on the probability space induced by moment conditions with impli...
"reflections on the probability space induced by moment conditions with impli..."reflections on the probability space induced by moment conditions with impli...
"reflections on the probability space induced by moment conditions with impli...
 
Learning dyadic data and predicting unaccomplished co-occurrent values by mix...
Learning dyadic data and predicting unaccomplished co-occurrent values by mix...Learning dyadic data and predicting unaccomplished co-occurrent values by mix...
Learning dyadic data and predicting unaccomplished co-occurrent values by mix...
 
Machine learning (9)
Machine learning (9)Machine learning (9)
Machine learning (9)
 
Expectation of Discrete Random Variable.ppt
Expectation of Discrete Random Variable.pptExpectation of Discrete Random Variable.ppt
Expectation of Discrete Random Variable.ppt
 
Handling missing data with expectation maximization algorithm
Handling missing data with expectation maximization algorithmHandling missing data with expectation maximization algorithm
Handling missing data with expectation maximization algorithm
 
Mayo Slides: Part I Meeting #2 (Phil 6334/Econ 6614)
Mayo Slides: Part I Meeting #2 (Phil 6334/Econ 6614)Mayo Slides: Part I Meeting #2 (Phil 6334/Econ 6614)
Mayo Slides: Part I Meeting #2 (Phil 6334/Econ 6614)
 
0202 fmc3
0202 fmc30202 fmc3
0202 fmc3
 
Testing for mixtures by seeking components
Testing for mixtures by seeking componentsTesting for mixtures by seeking components
Testing for mixtures by seeking components
 
Unit-8.pdf
Unit-8.pdfUnit-8.pdf
Unit-8.pdf
 
Probability/Statistics Lecture Notes 4: Hypothesis Testing
Probability/Statistics Lecture Notes 4: Hypothesis TestingProbability/Statistics Lecture Notes 4: Hypothesis Testing
Probability/Statistics Lecture Notes 4: Hypothesis Testing
 
Ch5
Ch5Ch5
Ch5
 
1616 probability-the foundation of probability theory
1616 probability-the foundation of probability theory1616 probability-the foundation of probability theory
1616 probability-the foundation of probability theory
 
Proba stats-r1-2017
Proba stats-r1-2017Proba stats-r1-2017
Proba stats-r1-2017
 
Cs229 notes8
Cs229 notes8Cs229 notes8
Cs229 notes8
 
Stochastic Processes - part 6
Stochastic Processes - part 6Stochastic Processes - part 6
Stochastic Processes - part 6
 
eatonmuirheadsoaita
eatonmuirheadsoaitaeatonmuirheadsoaita
eatonmuirheadsoaita
 

Plus de Christian Robert

Asymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceAsymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceChristian Robert
 
Workshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinWorkshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinChristian Robert
 
How many components in a mixture?
How many components in a mixture?How many components in a mixture?
How many components in a mixture?Christian Robert
 
Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Christian Robert
 
Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Christian Robert
 
discussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihooddiscussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihoodChristian Robert
 
NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)Christian Robert
 
Coordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like samplerCoordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like samplerChristian Robert
 
Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Christian Robert
 
Likelihood-free Design: a discussion
Likelihood-free Design: a discussionLikelihood-free Design: a discussion
Likelihood-free Design: a discussionChristian Robert
 
CISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceCISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceChristian Robert
 

Plus de Christian Robert (20)

Asymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceAsymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de France
 
Workshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinWorkshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael Martin
 
discussion of ICML23.pdf
discussion of ICML23.pdfdiscussion of ICML23.pdf
discussion of ICML23.pdf
 
How many components in a mixture?
How many components in a mixture?How many components in a mixture?
How many components in a mixture?
 
restore.pdf
restore.pdfrestore.pdf
restore.pdf
 
Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Testing for mixtures at BNP 13
Testing for mixtures at BNP 13
 
Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?
 
CDT 22 slides.pdf
CDT 22 slides.pdfCDT 22 slides.pdf
CDT 22 slides.pdf
 
discussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihooddiscussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihood
 
NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
Coordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like samplerCoordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like sampler
 
eugenics and statistics
eugenics and statisticseugenics and statistics
eugenics and statistics
 
Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Laplace's Demon: seminar #1
Laplace's Demon: seminar #1
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
asymptotics of ABC
asymptotics of ABCasymptotics of ABC
asymptotics of ABC
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
Likelihood-free Design: a discussion
Likelihood-free Design: a discussionLikelihood-free Design: a discussion
Likelihood-free Design: a discussion
 
the ABC of ABC
the ABC of ABCthe ABC of ABC
the ABC of ABC
 
CISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceCISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergence
 

Dernier

Integumentary System SMP B. Pharm Sem I.ppt
Integumentary System SMP B. Pharm Sem I.pptIntegumentary System SMP B. Pharm Sem I.ppt
Integumentary System SMP B. Pharm Sem I.pptshraddhaparab530
 
How to Add Barcode on PDF Report in Odoo 17
How to Add Barcode on PDF Report in Odoo 17How to Add Barcode on PDF Report in Odoo 17
How to Add Barcode on PDF Report in Odoo 17Celine George
 
Oppenheimer Film Discussion for Philosophy and Film
Oppenheimer Film Discussion for Philosophy and FilmOppenheimer Film Discussion for Philosophy and Film
Oppenheimer Film Discussion for Philosophy and FilmStan Meyer
 
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptxMULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptxAnupkumar Sharma
 
The Contemporary World: The Globalization of World Politics
The Contemporary World: The Globalization of World PoliticsThe Contemporary World: The Globalization of World Politics
The Contemporary World: The Globalization of World PoliticsRommel Regala
 
Active Learning Strategies (in short ALS).pdf
Active Learning Strategies (in short ALS).pdfActive Learning Strategies (in short ALS).pdf
Active Learning Strategies (in short ALS).pdfPatidar M
 
Transaction Management in Database Management System
Transaction Management in Database Management SystemTransaction Management in Database Management System
Transaction Management in Database Management SystemChristalin Nelson
 
How to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPHow to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPCeline George
 
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptxINTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptxHumphrey A Beña
 
Virtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdf
Virtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdfVirtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdf
Virtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdfErwinPantujan2
 
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...Nguyen Thanh Tu Collection
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designMIPLM
 
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)lakshayb543
 
Student Profile Sample - We help schools to connect the data they have, with ...
Student Profile Sample - We help schools to connect the data they have, with ...Student Profile Sample - We help schools to connect the data they have, with ...
Student Profile Sample - We help schools to connect the data they have, with ...Seán Kennedy
 
4.16.24 Poverty and Precarity--Desmond.pptx
4.16.24 Poverty and Precarity--Desmond.pptx4.16.24 Poverty and Precarity--Desmond.pptx
4.16.24 Poverty and Precarity--Desmond.pptxmary850239
 
ClimART Action | eTwinning Project
ClimART Action    |    eTwinning ProjectClimART Action    |    eTwinning Project
ClimART Action | eTwinning Projectjordimapav
 

Dernier (20)

Integumentary System SMP B. Pharm Sem I.ppt
Integumentary System SMP B. Pharm Sem I.pptIntegumentary System SMP B. Pharm Sem I.ppt
Integumentary System SMP B. Pharm Sem I.ppt
 
How to Add Barcode on PDF Report in Odoo 17
How to Add Barcode on PDF Report in Odoo 17How to Add Barcode on PDF Report in Odoo 17
How to Add Barcode on PDF Report in Odoo 17
 
Oppenheimer Film Discussion for Philosophy and Film
Oppenheimer Film Discussion for Philosophy and FilmOppenheimer Film Discussion for Philosophy and Film
Oppenheimer Film Discussion for Philosophy and Film
 
LEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptx
LEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptxLEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptx
LEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptx
 
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptxMULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
 
The Contemporary World: The Globalization of World Politics
The Contemporary World: The Globalization of World PoliticsThe Contemporary World: The Globalization of World Politics
The Contemporary World: The Globalization of World Politics
 
Active Learning Strategies (in short ALS).pdf
Active Learning Strategies (in short ALS).pdfActive Learning Strategies (in short ALS).pdf
Active Learning Strategies (in short ALS).pdf
 
Transaction Management in Database Management System
Transaction Management in Database Management SystemTransaction Management in Database Management System
Transaction Management in Database Management System
 
How to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPHow to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERP
 
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptxINTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
 
Virtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdf
Virtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdfVirtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdf
Virtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdf
 
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...
HỌC TỐT TIẾNG ANH 11 THEO CHƯƠNG TRÌNH GLOBAL SUCCESS ĐÁP ÁN CHI TIẾT - CẢ NĂ...
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-design
 
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
 
Student Profile Sample - We help schools to connect the data they have, with ...
Student Profile Sample - We help schools to connect the data they have, with ...Student Profile Sample - We help schools to connect the data they have, with ...
Student Profile Sample - We help schools to connect the data they have, with ...
 
Paradigm shift in nursing research by RS MEHTA
Paradigm shift in nursing research by RS MEHTAParadigm shift in nursing research by RS MEHTA
Paradigm shift in nursing research by RS MEHTA
 
4.16.24 Poverty and Precarity--Desmond.pptx
4.16.24 Poverty and Precarity--Desmond.pptx4.16.24 Poverty and Precarity--Desmond.pptx
4.16.24 Poverty and Precarity--Desmond.pptx
 
FINALS_OF_LEFT_ON_C'N_EL_DORADO_2024.pptx
FINALS_OF_LEFT_ON_C'N_EL_DORADO_2024.pptxFINALS_OF_LEFT_ON_C'N_EL_DORADO_2024.pptx
FINALS_OF_LEFT_ON_C'N_EL_DORADO_2024.pptx
 
YOUVE_GOT_EMAIL_PRELIMS_EL_DORADO_2024.pptx
YOUVE_GOT_EMAIL_PRELIMS_EL_DORADO_2024.pptxYOUVE_GOT_EMAIL_PRELIMS_EL_DORADO_2024.pptx
YOUVE_GOT_EMAIL_PRELIMS_EL_DORADO_2024.pptx
 
ClimART Action | eTwinning Project
ClimART Action    |    eTwinning ProjectClimART Action    |    eTwinning Project
ClimART Action | eTwinning Project
 

Reading Birnbaum's (1962) paper, by Li Chenlu

  • 1. On The Foundations Of Statistical Inference by ALLAN BIRNBAUM LI Chenlu 2013.01.22 1 / 33
  • 2. content 1 Introduction 2 / 33
  • 3. content 1 Introduction 2 Part1 Statistical Evidence The Principle of Sufficiency The Principle of Conditionality The Likelihood Principle 2 / 33
  • 4. content 1 Introduction 2 Part1 Statistical Evidence The Principle of Sufficiency The Principle of Conditionality The Likelihood Principle 3 Part2 Binary Experiments Finite Parameter Spaces More General Parameter Spaces Bayesian Methods: An Interpretation of the Principle of Insuffi- cient Reason 2 / 33
  • 5. content 1 Introduction 2 Part1 Statistical Evidence The Principle of Sufficiency The Principle of Conditionality The Likelihood Principle 3 Part2 Binary Experiments Finite Parameter Spaces More General Parameter Spaces Bayesian Methods: An Interpretation of the Principle of Insuffi- cient Reason 4 Conclusion 2 / 33
  • 6. Introduction The paper studies the likelihood principle(LP)and how the likelihood function can be used to mesure the evidence in the data about an unkown parameter. • The main aim of the paper is to show and discuss the implication of the fact that the LP is a consequence of the concepts of conditional frames of the reference and sufficiency. • The second aim of the paper is to describe how and why these principles are appropriate ways to characterize statistical evidence in parametric models for inference purposes. 3 / 33
  • 7. Part1 1 Statistical Evidence 2 The Principle of Sufficiency 3 The Principle of Conditionality 4 The Likelihood Principle 4 / 33
  • 8. Statistical Evidence An experiment E is defined as E={Ω,S,f(x,θ)},where f is a density,θ is the unknown parameter,Ω is the parameter space and S the sample space of outcomes x of E.The likelihood function determined by an observed outcome is Lx =f(x,θ) Birnbaum states that the central purpose of the paper is to clarify the essential structure and properites of statistical evidence,termed the evidential meaning of (E,x) and denoted by Ev(E,x),in various instances. Ev(E,x)is the evidence about θ supplied by x and E 5 / 33
  • 9. The Principle of Sufficiency The Principle of Sufficiency (S) Let E be any experiment,with sample space{x},and let t(x)be any sufficient statistic. Let E denote the derived experiment,having the same parameter space,such that when any outcome x of E is observed the corresponding outcome t = t(x) of E is observe.Then for each x,Ev (E , x) = Ev (E , t),where t = t(x) ∗ If t(x) is a sufficient statistic for θ, then any inference about θ should depend on the sample x only through the value t(x) 6 / 33
  • 10. The Principle of Sufficiency If x is any specified outcome of any specified experiment E,the likelihood function determined by x is the function of θ:cf(x,θ),where c is any positive constant value If for some positive constant c we have f (x, θ) = cg (y , θ),for all θ,x and y are said to determine the same likelihood function If two outcomes x,x of one experiment determine the same likelihood function,f(x,θ)=cf(x ,θ) for all θ ,then there exists a sufficient statistic t such that t(x) = t(x ) 7 / 33
  • 11. The Principle of Sufficiency Lemma 1: If two outcomes x, x′ of any experiment E determine the same likelihood function, then they have the same evidential meaning: Ev(E, x) = Ev(E, x′).
  • 12. The Principle of Conditionality The definition of a mixture experiment: An experiment E is called a mixture, with components {Eh}, if it is mathematically equivalent to a two-stage experiment of the following form: 1 An observation h is taken on a random variable H having a fixed and known distribution G (G does not depend on unknown parameter values). 2 The corresponding component experiment Eh is carried out, yielding an outcome xh. Thus each outcome of E is a pair (Eh, xh).
  • 13. The Principle of Conditionality The Principle of Conditionality (C): If an experiment E is a mixture G of components {Eh}, with possible outcomes (Eh, xh), then Ev(E, (Eh, xh)) = Ev(Eh, xh). That is, the evidential meaning of any outcome (Eh, xh) of any experiment E having a mixture structure is the same as the evidential meaning of the corresponding outcome xh of the corresponding component experiment Eh, ignoring otherwise the over-all structure of the original experiment E.
  • 14. The Principle of Conditionality Example: Suppose that two instruments (h = 1 or 2) are available for use in an experiment, with respective probabilities p1 = 0.73, p2 = 0.27 of being selected for use. Each instrument gives the observation y = 1 or y = 0. Consider the assertion Ev(E, (E1, 1)) = Ev(E1, 1), and suppose that, under the accepted experimental conditions, E leads to selection of the first instrument (h = 1). In this situation, one would be prepared to report either (E1, 0) or (E1, 1) as a complete description of the statistical evidence obtained.
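
A small simulation sketch makes the two-stage structure concrete (the per-instrument probabilities of observing y = 1 are invented for the demo; only the selection probabilities 0.73 and 0.27 come from the example):

```python
import random

random.seed(1)

# Invented detail for the sketch: each instrument's chance of giving y = 1
# (these accuracies are assumptions, not part of Birnbaum's example).
p_obs = {1: 0.9, 2: 0.6}

def mixture_trial():
    """One run of the mixture E: select an instrument, then observe y."""
    h = 1 if random.random() < 0.73 else 2      # stage 1: known mixture G
    y = 1 if random.random() < p_obs[h] else 0  # stage 2: component E_h
    return h, y

# Conditionality says that once h = 1 is selected, the evidence in y should
# be assessed within E_1 alone, ignoring the unselected instrument E_2.
print([mixture_trial() for _ in range(5)])
```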
  • 15. The Principle of Conditionality Example (continued): For purposes of informative inference, if y = 1 is observed with the first instrument, then the report (E1, 1) seems to be an appropriate and complete description of the statistical evidence obtained, and the "more complete" report (E, (E1, 1)) seems to differ from it only by the addition of recognizably redundant elements irrelevant to the evidential meaning and evidential interpretation of this outcome of E.
  • 16. The Likelihood Principle The Likelihood Principle (L): If E and E′ are any two experiments with a common parameter space, and if x and y are any respective outcomes which determine likelihood functions satisfying f(x, θ) = cg(y, θ) for some positive constant c = c(x, y) and all θ, then Ev(E, x) = Ev(E′, y). That is, the evidential meaning Ev(E, x) of any outcome x of any experiment E is characterized completely by the likelihood function cf(x, θ), and is otherwise independent of the structure of (E, x).
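
The canonical illustration of (L) across two experiments (standard in the literature, though the specific numbers here are chosen for the demo) compares a binomial experiment with a negative binomial one: 3 successes in 12 fixed trials, versus sampling until the 3rd success, which happens to arrive on trial 12. The two likelihood functions are proportional, so (L) says the two outcomes carry the same evidence about θ:

```python
import numpy as np
from scipy.stats import binom, nbinom

theta = np.linspace(0.01, 0.99, 99)

# Binomial experiment: n = 12 trials fixed, x = 3 successes observed.
lik_binom = binom.pmf(3, 12, theta)

# Negative binomial experiment: sample until r = 3 successes; the 3rd
# success arrives on trial 12, i.e. 9 failures before the 3rd success.
lik_nbinom = nbinom.pmf(9, 3, theta)

# The ratio is constant in theta: both likelihoods are proportional to
# theta**3 * (1 - theta)**9, so (L) assigns them the same evidential meaning.
ratio = lik_binom / lik_nbinom
print(np.allclose(ratio, ratio[0]))   # True
```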
  • 17. The Likelihood Principle Lemma 2: (S) and (C) ⇐⇒ (L). Proof of ⇐: That (L) implies (C) follows immediately from the fact that in all cases the likelihood functions determined respectively by (E, (Eh, xh)) and (Eh, xh) are proportional. That (L) implies (S) follows immediately from Lemma 1.
  • 18. The Likelihood Principle Lemma 2: (S) and (C) ⇐⇒ (L). Proof of ⇒: Let E and E′ denote any two experiments having the same parameter space Ω = {θ}, represented by probability density functions f(x, θ), g(y, θ) on their respective sample spaces S = {x}, S′ = {y}. Consider the mixture experiment E* whose components are just E and E′, taken with equal probabilities. Let z denote the sample point of E*, and let C denote any set of points z; then C = A ∪ B, where A ⊂ S and B ⊂ S′, and Prob(Z ∈ C | θ) = (1/2)Prob(A | θ, E) + (1/2)Prob(B | θ, E′). The probability density function representing E* is h(z, θ) = (1/2)f(x, θ) if z = x ∈ S, and h(z, θ) = (1/2)g(y, θ) if z = y ∈ S′. From (C), it follows that Ev(E*, (E, x)) = Ev(E, x) for each x ∈ S, and Ev(E*, (E′, y)) = Ev(E′, y) for each y ∈ S′. (a)
  • 19. The Likelihood Principle Proof of ⇒ (continued): Let x, y be any two outcomes of E, E′ respectively which determine the same likelihood function: f(x, θ) = cg(y, θ) for all θ, where c is some positive constant. Then h(x, θ) = ch(y, θ) for all θ, so the two outcomes (E, x), (E′, y) of E* determine the same likelihood function, and it follows from (S) and Lemma 1 that Ev(E*, (E, x)) = Ev(E*, (E′, y)). (b) From (a) and (b) it follows that Ev(E, x) = Ev(E′, y). The consequence states that any two outcomes x, y of any two experiments E, E′ (with the same parameter space) have the same evidential meaning if they determine the same likelihood function.
  • 20. The Likelihood Principle Impact of the principle: • The implication ⇒ is the most important part of the equivalence, because it means that if you do not accept (L), you have to discard either (S) or (C), two widely accepted principles. • The most important consequence of (L) seems to be that evidential measures based on a specific experimental frame of reference (like p-values and confidence levels) are somewhat unsatisfactory. • In other words, (L) eliminates the need to consider the sample space, or any part of it, once the data are observed. Lemma 2 truly was a "breakthrough" in the foundations of statistical inference and made (L) stand on its own ground, independent of a Bayesian argument.
  • 21. Part 2: 1 Binary Experiments 2 Finite Parameter Spaces 3 More General Parameter Spaces 4 Bayesian Methods
  • 22. Binary Experiments Let Ω = {θ1, θ2}. In this case, (L) means that all the information lies in the likelihood ratio λ(x) = f(x, θ2)/f(x, θ1). The question now is what evidential meaning we can attach to the number λ(x). To answer this, Birnbaum first considers a binary experiment in which the sample space has only two points, denoted (+) and (−), such that p(+|θ1) = p(−|θ2) = α for some α ≤ 1/2. Such an experiment is called a symmetric simple binary experiment and is characterized by the "error" probability α.
  • 23. Binary Experiments For such an experiment, λ(+) = (1 − α)/α ≥ 1, so α = 1/(1 + λ(+)), and λ(−) = α/(1 − α) ≤ 1. The important point is that, according to (L), two outcomes with the same value of λ have the same evidential meaning. Therefore, the evidential meaning of an outcome x with λ(x) ≥ 1 from any binary experiment E is the same as the evidential meaning of the (+) outcome from a symmetric simple binary experiment with α(x) = 1/(1 + λ(x)). α(x) is called the intrinsic significance level and is a measure of evidence that satisfies (L).
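
A minimal sketch (the two outcome distributions are invented for the demo) computes the likelihood ratio for an observed outcome of a binary experiment and converts it into the intrinsic significance level:

```python
def intrinsic_significance(f1, f2, x):
    """Intrinsic significance level alpha(x) = 1 / (1 + lambda(x)) for an
    outcome x with likelihood ratio lambda(x) = f2(x) / f1(x) >= 1."""
    lam = f2(x) / f1(x)
    return 1.0 / (1.0 + lam)

# Invented binary experiment on a three-point sample space.
p_theta1 = {"a": 0.7, "b": 0.2, "c": 0.1}   # distribution under theta1
p_theta2 = {"a": 0.1, "b": 0.2, "c": 0.7}   # distribution under theta2

# Outcome "c" has lambda = 0.7 / 0.1 = 7, so alpha(x) = 1/8.
print(intrinsic_significance(p_theta1.get, p_theta2.get, "c"))   # 0.125
```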
  • 24. Finite Parameter Spaces If E is any experiment with a parameter space containing only a finite number k of points, θ = i = 1, 2, ..., k, any observed outcome x of E determines a likelihood function L(i) = cf(x, i), i = 1, ..., k. We can assume that Σ_{i=1}^{k} L(i) = 1. Any experiment E with a finite sample space j = 1, ..., m and finite parameter space is represented by a k × m stochastic matrix E = (p_ij), where Σ_{j=1}^{m} p_ij = 1 and p_ij = Prob[j | i] for each i, j. Here the i-th row is the discrete probability distribution over outcomes given parameter value i, and the j-th column is proportional to the likelihood function L(i) = L(i | j) = c·p_ij, i = 1, ..., k, determined by outcome j.
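
A short sketch (the matrix entries are invented, chosen to match Example 3 below) represents such an experiment as a stochastic matrix and reads off the normalized likelihood function from the column of the observed outcome:

```python
import numpy as np

# Invented 3 x 2 experiment: rows are parameter values i = 1, 2, 3 and
# columns are outcomes j = 1, 2; each row is a probability distribution.
E = np.array([[0.8, 0.2],
              [0.1, 0.9],
              [0.1, 0.9]])
assert np.allclose(E.sum(axis=1), 1.0)

def likelihood(E, j):
    """Normalized likelihood L(i) = c * p_ij determined by outcome j (0-based)."""
    col = E[:, j]
    return col / col.sum()

print(likelihood(E, 0))   # outcome j = 1: L = (0.8, 0.1, 0.1)
```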
  • 25. Finite Parameter Spaces: Qualitative evidential interpretation Example 1: Consider an experiment with only two sample points j = 1, 2. We can define Prob[j = 1 | i] = L(i) and Prob[j = 2 | i] = 1 − L(i), for i = 1, ..., k. For example, the likelihood function L(i) = 1/3, i = 1, 2, 3, represents the possible outcome j = 1 of the experiment E whose three rows are each (1/3, 2/3). • Since this experiment gives the same distribution on the two-point sample space under each hypothesis, it is completely uninformative. According to the likelihood principle, the given likelihood function therefore has a simple evidential interpretation, regardless of the structure of the experiment: it represents a completely uninformative outcome.
  • 26. Finite Parameter Spaces: Qualitative evidential interpretation Example 2: The likelihood function (1/2, 1/2, 0) (that is, L(1) = L(2) = 1/2, L(3) = 0 on the 3-point parameter space i = 1, 2, 3) represents the possible outcome j = 1 of the experiment E with rows (1/2, 1/2), (1/2, 1/2), (0, 1). • This outcome of E is impossible under i = 3, and hence supports without risk of error the conclusion that i ≠ 3. • E prescribes identical distributions under i = 1 and 2, and hence the experiment E, and each of its possible outcomes, is completely uninformative as between i = 1 and 2.
  • 27. Finite Parameter Spaces: Qualitative evidential interpretation Example 3: Some likelihood functions on a given parameter space can be compared and ordered in a natural way. Consider the likelihood functions (0.8, 0.1, 0.1) and (0.45, 0.275, 0.275). The interpretation that the first is more informative than the second is supported as follows: let E = (p_ij) have rows (0.8, 0.2), (0.1, 0.9), (0.1, 0.9). When outcome j = 2 of E is observed, we report w = 1 with probability 1/2 and w = 2 with probability 1/2; when outcome j = 1 of E is observed, the report w = 1 is given. This randomized reporting defines a derived experiment E′ = (p_iw) with rows (0.9, 0.1), (0.55, 0.45), (0.55, 0.45), whose outcome w = 1 determines the likelihood function (0.45, 0.275, 0.275). The experiment E′ is less informative than E.
  • 28. Finite Parameter Spaces: Intrinsic confidence methods Example 4: Consider the likelihood function (0.9, 0.09, 0.01) defined on the parameter space i = 1, 2, 3. This represents the possible outcome j = 1 of the experiment E = (p_ij) with rows (0.9, 0.01, 0.09), (0.09, 0.9, 0.01), (0.01, 0.09, 0.9). • In this experiment, a confidence set estimator of the parameter i is given by taking, for each possible outcome j, the two values of i having the greatest likelihoods L(i | j). • We can verify that under each value of i the probability is 0.99 that the confidence set determined in this way contains i; that is, these sets have confidence coefficient 0.99.
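
A quick sketch can verify the stated coverage: for each outcome column, take the two most likely parameter values, then check that under every i the selected set contains i with probability 0.99.

```python
import numpy as np

# Experiment from Example 4: rows = parameter values i, columns = outcomes j.
E = np.array([[0.90, 0.01, 0.09],
              [0.09, 0.90, 0.01],
              [0.01, 0.09, 0.90]])

# For each outcome j, the confidence set holds the two values of i with the
# greatest likelihoods L(i | j), i.e. the two largest entries of column j.
conf_sets = [set(np.argsort(E[:, j])[-2:]) for j in range(3)]

# Coverage under each true i: total probability of the outcomes j whose
# confidence set contains i.
for i in range(3):
    coverage = sum(E[i, j] for j in range(3) if i in conf_sets[j])
    print(f"i = {i + 1}: coverage = {coverage:.2f}")   # 0.99 for every i
```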
  • 29. Finite Parameter Spaces: Intrinsic confidence methods The general form of the intrinsic confidence methods, for any likelihood function L(i) defined on a finite parameter space i = 1, ..., k with Σ_{i=1}^{k} L(i) = 1: if there is a unique least likely value i1 of i, let c1 = 1 − L(i1); then the remaining (k − 1) parameter points will be called an intrinsic confidence set with intrinsic confidence coefficient c1. If there is a pair of values of i, say i1, i2, with likelihoods strictly smaller than those of the remaining (k − 2) points, call the latter set of points an intrinsic confidence set with intrinsic confidence coefficient c2 = 1 − L(i1) − L(i2), and so on. These are regular confidence sets in the constructed k × k experiment E = (p_ij) whose first column is (L(1), L(2), ..., L(k)) and whose remaining columns are its cyclic shifts, so that each row is a probability distribution and each column reproduces (a cyclic relabeling of) the given likelihood function.
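
A sketch of the general recipe (a minimal implementation under the stated assumptions: L is normalized and the likelihood values are distinct, so the ordering is unambiguous): sort the parameter points by likelihood and peel off the least likely ones, reporting each remaining set with coefficient 1 minus the peeled-off likelihood.

```python
import numpy as np

def intrinsic_confidence_sets(L):
    """Nested intrinsic confidence sets for a normalized likelihood L(i).
    Assumes distinct likelihood values so the ordering is unambiguous."""
    L = np.asarray(L, dtype=float)
    order = np.argsort(L)            # least likely parameter points first
    sets = []
    dropped = 0.0
    for r in range(len(L) - 1):
        dropped += L[order[r]]       # peel off the next least likely point
        remaining = sorted(i + 1 for i in order[r + 1:])   # 1-based labels
        sets.append((remaining, 1.0 - dropped))
    return sets

# Likelihood function from Example 4.
for s, c in intrinsic_confidence_sets([0.9, 0.09, 0.01]):
    print(s, f"intrinsic confidence coefficient = {c:.2f}")
# [1, 2] 0.99, then [1] 0.90
```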
  • 30. Finite Parameter Spaces On finite parameter spaces: • For finite parameter spaces, significance levels, confidence sets, and confidence levels can be based on the observed Lx(θ), hence satisfying (L): they are defined as the regular such methods and concepts for a constructed experiment whose likelihood function is identical to Lx(θ). • Therefore, in the case of finite parameter spaces, a clear and logical evidential interpretation of the likelihood function can be given through intrinsic methods and concepts.
  • 31. More General Parameter Spaces • This section deals mainly with the case where Ω is the real line. Given E, x, and Lx(θ), a hypothetical experiment E′ consisting of a single observation of Y with density g(y, θ) = cLx(θ − y) is constructed. • Then (E, x) has the same likelihood function as (E′, 0), and (L) implies that the same inference should be used in (E, x) as in (E′, 0). For example, if a regular (1 − α) confidence interval is used in E′, then this interval estimate (for y = 0) should be the one used also for (E, x); it is called a (1 − α) intrinsic confidence interval for (E, x).
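
A numerical sketch under stated assumptions (the Gaussian-shaped likelihood on a grid is invented for the demo, and an equal-tailed interval from the normalized likelihood is one standard way to form the regular interval in the constructed location experiment E′, not the only one):

```python
import numpy as np

def intrinsic_interval(theta, lik, alpha=0.05):
    """Equal-tailed (1 - alpha) interval from the likelihood on a grid,
    treating the normalized likelihood as the density that the constructed
    location experiment E' assigns to theta when y = 0 is observed."""
    dx = theta[1] - theta[0]
    dens = lik / (lik.sum() * dx)       # normalize L_x(theta) to a density
    cdf = np.cumsum(dens) * dx
    lo = theta[np.searchsorted(cdf, alpha / 2)]
    hi = theta[np.searchsorted(cdf, 1 - alpha / 2)]
    return lo, hi

# Invented example: a Gaussian-shaped likelihood centered at theta = 2.
grid = np.linspace(-3.0, 7.0, 2001)
lik = np.exp(-0.5 * (grid - 2.0) ** 2)
print(intrinsic_interval(grid, lik))    # roughly (0.04, 3.96)
```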
  • 32. More General Parameter Spaces As a general comment, Birnbaum emphasizes that intrinsic methods and concepts can, in light of (L), be nothing more than methods of expressing evidential meaning already implicit in Lx(θ) itself. In the discussion, Birnbaum does not recommend intrinsic methods as statistical methods for practice. The value of these methods is conceptual, and the main use of intrinsic concepts is to show that likelihood functions as such are evidentially meaningful.
  • 33. Bayesian Methods: An Interpretation of the Principle of Insufficient Reason Birnbaum views the Bayes approach as not directed to informative inference, but rather as a way to determine an appropriate final synthesis of available information, based on prior information and the data. It is observed that in determining the posterior distribution, the contribution of the data and E is Lx(θ) only, so the Bayes approach implies (L).
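
A compact sketch (the uniform prior reflects the principle of insufficient reason; the likelihood values are taken from Example 4) shows why: on a finite parameter space the posterior is prior times likelihood, renormalized, so two proportional likelihood functions always yield the same posterior.

```python
import numpy as np

def posterior(prior, lik):
    """Posterior on a finite parameter space: prior times likelihood,
    renormalized; the data enter only through the likelihood function."""
    p = np.asarray(prior, dtype=float) * np.asarray(lik, dtype=float)
    return p / p.sum()

prior = [1/3, 1/3, 1/3]            # insufficient reason: uniform prior
lik = np.array([0.9, 0.09, 0.01])  # likelihood function from Example 4

# Proportional likelihoods give identical posteriors, so Bayesian
# updating automatically satisfies (L).
print(posterior(prior, lik))
print(posterior(prior, 5 * lik))   # same output
```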
  • 34. Conclusion • Birnbaum's main result, that (L) follows from the sufficiency and conditionality principles that most statisticians accept, must be regarded as one of the deepest theorems of theoretical statistics, yet the proof is remarkably simple. • The result had a decisive influence on how many statisticians came to view the likelihood function as a basic quantity in statistical analysis. • It has also affected in a general way how we view the science of statistics. Birnbaum introduced principles of equivalence within and between experiments, showing various relationships between these principles, which made it possible to discuss the different concepts from alternative viewpoints.
  • 35. References Allan Birnbaum, "On the Foundations of Statistical Inference: Binary Experiments", Institute of Mathematical Sciences, New York University. Daniel Steel, "Bayesian Confirmation Theory and the Likelihood Principle", Michigan State University. Royall, R., "Statistical Evidence: A Likelihood Paradigm", Chapman and Hall, London. Jan F. Bjørnstad, "Breakthroughs in Statistics, Volume I: Foundations and Basic Theory", The University of Trondheim.