Validity and Reliability
Validity
Definition

How well a survey measures what it sets out to
measure.
Validity can be determined only if there is a
reference procedure, or “gold standard”, e.g.:
Food-frequency questionnaire – food diary
Birth weight – hospital record
Validity
Three methods:
1. Content validity
2. Criterion-related validity
3. Construct validity
Screening test

Validity – get the correct result
Sensitivity – correctly classify cases
Specificity – correctly classify non-cases
[screening and diagnosis are not identical]
Validity: 1) Sensitivity

Probability (proportion) of
correct classification of cases

Cases found / all cases
Validity: 2) Specificity

Probability (proportion) of
correct classification of noncases

Noncases identified / all noncases
2 cases / month

[Diagram: individual cases (O) arising at two per month, plotted along the disease timeline: pre-detectable, preclinical, clinical, old]
[Diagram: cases (O) along the disease timeline: pre-detectable, pre-clinical, clinical, old]
[Diagram: a population with cases (O) scattered throughout]
What is the prevalence of “the condition”?

[Same population diagram]
Sensitivity of a screening test

Probability (proportion) of correct
classification of detectable, pre-clinical cases
Pre-detectable (8)   pre-clinical (10)   clinical (6)   old (14)

[Population diagram with the cases marked at each stage]
Sensitivity = Correctly classified / Total detectable pre-clinical (10)

[Population diagram repeated]
Specificity of a screening test

      Probability (proportion) of
  correct classification of noncases

 Noncases identified / all noncases
Pre-detectable (8)   pre-clinical (10)   clinical (6)   old (14)

[Population diagram repeated]

Specificity = Correctly classified / Total non-cases
(162, or 170 if pre-detectable cases are counted as non-cases)

[Population diagram repeated]
True Disease Status

                          Cases                Non-cases
            Positive      True positive (a)    False positive (b)    a+b
Screening
   Test
 Results    Negative      False negative (c)   True negative (d)     c+d

                          a+c                  b+d

 Sensitivity = True positives / All cases     = a / (a+c)
 Specificity = True negatives / All non-cases = d / (b+d)
True Disease Status

                          Cases      Non-cases
            Positive        140          1,000     1,140
Screening
   Test
 Results    Negative         60         19,000    19,060

                            200         20,000

 Sensitivity = True positives / All cases     = 140 / 200       = 70%
 Specificity = True negatives / All non-cases = 19,000 / 20,000 = 95%
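The arithmetic in this table can be checked with a short script (a sketch; the counts are the ones from the worked example above):

```python
# 2x2 screening-table counts from the worked example above:
a, b = 140, 1_000    # a = true positives, b = false positives
c, d = 60, 19_000    # c = false negatives, d = true negatives

sensitivity = a / (a + c)   # true positives / all cases
specificity = d / (b + d)   # true negatives / all non-cases

print(f"Sensitivity = {sensitivity:.0%}")   # 140 / 200       = 70%
print(f"Specificity = {specificity:.0%}")   # 19,000 / 20,000 = 95%
```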
Interpreting test results: predictive value

Probability (proportion) of those tested who
are correctly classified:

PPV = cases identified / all positive tests
NPV = noncases identified / all negative tests
True Disease Status

                          Cases                Non-cases
            Positive      True positive (a)    False positive (b)    a+b
Screening
   Test
 Results    Negative      False negative (c)   True negative (d)     c+d

                          a+c                  b+d

 PPV = True positives / All positives = a / (a+b)
 NPV = True negatives / All negatives = d / (c+d)
True Disease Status

                          Cases      Non-cases
            Positive        140          1,000     1,140
Screening
   Test
 Results    Negative         60         19,000    19,060

                            200         20,000

 PPV = True positives / All positives = 140 / 1,140     = 12.3%
 NPV = True negatives / All negatives = 19,000 / 19,060 = 99.7%
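The predictive values follow from the same counts (a sketch using the example's numbers):

```python
# Same 2x2 counts as in the sensitivity/specificity example:
a, b = 140, 1_000    # true positives, false positives
c, d = 60, 19_000    # false negatives, true negatives

ppv = a / (a + b)    # true positives / all positive tests
npv = d / (c + d)    # true negatives / all negative tests

print(f"PPV = {ppv:.1%}")   # 140 / 1,140     = 12.3%
print(f"NPV = {npv:.1%}")   # 19,000 / 19,060 = 99.7%
```

Note how the PPV is low despite 70% sensitivity and 95% specificity, because the condition is rare (200 cases in 20,200 people).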
Confidence interval
point estimate ± [1.96 × SE(estimate)]
SE(sensitivity) = √(P(1 − P) / N)
SE = 0.013
0.70 − (1.96 × 0.013) = 0.67
0.70 + (1.96 × 0.013) = 0.73
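As a check, the interval can be computed directly (a sketch; the SE value of 0.013 is the one stated on the slide, and in general comes from √(P(1 − P)/N)):

```python
p = 0.70      # point estimate (the sensitivity from the earlier example)
se = 0.013    # standard error as stated on the slide
              # (in general, se = math.sqrt(p * (1 - p) / n))

lower = p - 1.96 * se
upper = p + 1.96 * se
print(f"95% CI: {lower:.2f} to {upper:.2f}")   # 0.67 to 0.73
```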
Receiver operating characteristic (ROC) curve

Not all tests give a simple yes/no result. Some
yield results that are numerical values along a
continuous scale of measurement. In these
situations, high sensitivity is obtained at the
cost of low specificity, and vice versa.
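The trade-off can be illustrated by sliding a cut-off along a continuous measurement (a sketch with made-up values, not data from the slides):

```python
# Hypothetical test values: higher values suggest disease.
cases    = [5.1, 6.2, 7.0, 7.8, 8.5]   # measurements in 5 diseased people
noncases = [3.0, 3.9, 4.5, 5.5, 6.4]   # measurements in 5 healthy people

def positives(values, cutoff):
    """Count values at or above the cut-off (i.e. test-positive)."""
    return sum(v >= cutoff for v in values)

# Raising the cut-off trades sensitivity for specificity:
for cutoff in (4.0, 5.0, 6.0, 7.0):
    sens = positives(cases, cutoff) / len(cases)
    spec = 1 - positives(noncases, cutoff) / len(noncases)
    print(f"cutoff {cutoff}: sensitivity {sens:.0%}, specificity {spec:.0%}")
```

Plotting each (1 − specificity, sensitivity) pair across all cut-offs traces the ROC curve.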
Reliability
Reliability
Repeatability – get the same result
each time
from each instrument
from each rater
If the correct result is not known, then
only reliability can be examined.
Definition

The degree of stability exhibited when a
measurement is repeated under identical
conditions.

Lack of reliability may arise from divergences
between observers or instruments of
measurement, or from instability of the attribute
being measured.
      (from Last, A Dictionary of Epidemiology)
Assessment of reliability
Test-Retest Reliability
Equivalence
Internal Consistency
SPSS
Reliability: Kappa
EXAMPLE OF PERCENT AGREEMENT

Two physicians are each given a
set of 100 X-rays to look at independently
and asked to judge whether pneumonia is
present or absent. When both sets of
diagnoses are tallied, it is found that 95%
of the diagnoses are the same.
IS PERCENT AGREEMENT GOOD ENOUGH?

Do these two physicians exhibit high
diagnostic reliability?

Can there be 95% agreement between
two observers without really having
good reliability?
Compare the two tables below:

     Table 1                 Table 2
               MD#1                    MD#1
              Yes   No                Yes   No
        Yes    1     3          Yes   43     3
 MD#2                    MD#2
        No     2    94          No     2    52

In both instances, the physicians agree
95% of the time. Are the two physicians
equally reliable in the two tables?
USE OF THE KAPPA STATISTIC TO ASSESS RELIABILITY

Kappa is a widely used test of
inter- or intra-observer agreement
(or reliability) which corrects for
chance agreement.
KAPPA VARIES FROM +1 TO −1
+1 means that the two observers are perfectly
reliable. They classify everyone exactly the
same way.

0 means there is no relationship at all
between the two observers’
classifications, above the agreement that
would be expected by chance.

−1 means the two observers classify exactly the
opposite of each other. If one observer says
yes, the other always says no.
GUIDE TO USE OF KAPPAS IN
EPIDEMIOLOGY AND MEDICINE

Kappa > .80 is considered excellent
Kappa .60 - .80 is considered good
Kappa .40 - .60 is considered fair
Kappa < .40 is considered poor
HOW TO CALCULATE KAPPA

1. Calculate observed agreement (observations on
which the observers agree / total observations). In
both Table 1 and Table 2 it is 95%.

2. Calculate expected agreement (chance
agreement) based on the marginal totals.
Table 1’s marginal totals are:

  OBSERVED           MD#1
                Yes      No
          Yes    1       3      4
   MD#2
          No     2       94    96

                 3       97   100
How do we calculate the N expected by chance in each cell?
We assume that each cell should reflect the marginal
distributions, i.e. the proportion of yes and no answers
should be the same within the four-fold table as in the
marginal totals.

  OBSERVED           MD#1
                Yes      No
   MD#2   Yes    1       3      4
          No     2       94    96
                 3       97   100

  EXPECTED           MD#1
                Yes      No
   MD#2   Yes                   4
          No                   96
                 3       97   100
To do this, we find the proportion of answers in either
the column (3% and 97%, yes and no respectively, for
MD #1) or row (4% and 96%, yes and no respectively,
for MD #2) marginal totals, and apply one of the two
proportions to the other marginal total. For example,
96% of the row totals are in the “No” category.
Therefore, by chance, 96% of MD #1’s “No” answers should
also fall in the “No” row; 96% of 97 is 93.12.

  EXPECTED           MD#1
                Yes       No
   MD#2   Yes                     4
          No            93.12    96
                 3        97    100
By subtraction, all other cells fill in automatically,
and each yes/no distribution reflects the marginal
distribution. Any cell could have been used to make
the calculation, because once one cell is specified in
a 2x2 table with fixed marginal totals, all
other cells are also specified.

  EXPECTED           MD#1
                Yes       No
   MD#2   Yes   0.12     3.88     4
          No    2.88    93.12    96
                 3        97    100
Now you can see that, just by the operation
of chance, 93.24 of the 100 observations
should have been agreed on by the two
observers (93.12 + 0.12).
Below is the formula for calculating Kappa
from expected agreement:

        Observed agreement − Expected agreement
Kappa = –––––––––––––––––––––––––––––––––––––––
               1 − Expected agreement

 95% − 93.24%     1.76%
––––––––––––––– = ––––– = 0.26
 100% − 93.24%    6.76%
How good is a Kappa of 0.26?

Kappa > .80 is considered excellent
Kappa .60 - .80 is considered good
Kappa .40 - .60 is considered fair
Kappa < .40 is considered poor
In the second example, the observed
agreement was also 95%, but the marginal
totals were very different:

  OBSERVED           MD#1
                Yes      No
   MD#2   Yes                  46
          No                   54
                45       55   100
Using the same procedure as before, we calculate the expected
N in any one cell, based on the marginal totals. For example,
the lower-right cell is 54% of 55, which is 29.7.

  EXPECTED           MD#1
                Yes       No
   MD#2   Yes                    46
          No             29.7    54
                45        55    100
And, by subtraction, the other cells are
as below. The cells which indicate
agreement (Yes/Yes and No/No)
add up to 50.4%.

  EXPECTED           MD#1
                Yes      No
   MD#2   Yes   20.7    25.3    46
          No    24.3    29.7    54
                 45      55    100
Enter the two agreements into the formula:

        Observed agreement − Expected agreement
Kappa = –––––––––––––––––––––––––––––––––––––––
               1 − Expected agreement

 95% − 50.4%     44.6%
–––––––––––––– = ––––– = 0.90
 100% − 50.4%    49.6%

In this example, the observers have the
same % agreement, but now it is
much further from chance.
A Kappa of 0.90 is considered excellent.
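The whole calculation can be wrapped in a short function (a sketch following the marginal-totals method above; the table layout is an assumed convention):

```python
def kappa(table):
    """Cohen's kappa for a 2x2 agreement table [[yes_yes, yes_no], [no_yes, no_no]]."""
    n = sum(sum(row) for row in table)
    # Observed agreement: the two diagonal (agreement) cells.
    observed = (table[0][0] + table[1][1]) / n
    row_totals = [sum(row) for row in table]
    col_totals = [table[0][0] + table[1][0], table[0][1] + table[1][1]]
    # Chance agreement: each agreement cell's expected N follows the marginals.
    expected = (row_totals[0] * col_totals[0] + row_totals[1] * col_totals[1]) / n**2
    return (observed - expected) / (1 - expected)

print(f"{kappa([[ 1, 3], [2, 94]]):.2f}")   # Table 1: 0.26
print(f"{kappa([[43, 3], [2, 52]]):.2f}")   # Table 2: 0.90
```

Both tables give 95% observed agreement, but kappa separates the one that barely beats chance from the one far above it.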

Contenu connexe

Similaire à Validity

Tests of diagnostic accuracy
Tests of diagnostic accuracyTests of diagnostic accuracy
Tests of diagnostic accuracy
Simba Takuva
 
Epidemiological method to determine utility of a diagnostic test
Epidemiological method to determine utility of a diagnostic testEpidemiological method to determine utility of a diagnostic test
Epidemiological method to determine utility of a diagnostic test
Bhoj Raj Singh
 
Diagnostic testing 2009
Diagnostic testing 2009Diagnostic testing 2009
Diagnostic testing 2009
coolboy101pk
 
Epidemiological Approaches for Evaluation of diagnostic tests.pptx
Epidemiological Approaches for Evaluation of diagnostic tests.pptxEpidemiological Approaches for Evaluation of diagnostic tests.pptx
Epidemiological Approaches for Evaluation of diagnostic tests.pptx
Bhoj Raj Singh
 
OAJ presentation final draft
OAJ presentation final draftOAJ presentation final draft
OAJ presentation final draft
Brian Eisen
 

Similaire à Validity (20)

Tests of diagnostic accuracy
Tests of diagnostic accuracyTests of diagnostic accuracy
Tests of diagnostic accuracy
 
Screening test (basic concepts)
Screening test (basic concepts)Screening test (basic concepts)
Screening test (basic concepts)
 
Diagnostic test
Diagnostic test Diagnostic test
Diagnostic test
 
Diagnotic and screening tests
Diagnotic and screening testsDiagnotic and screening tests
Diagnotic and screening tests
 
2016 veterinary diagnostics
2016 veterinary diagnostics2016 veterinary diagnostics
2016 veterinary diagnostics
 
screening and diagnostic testing
screening and diagnostic  testingscreening and diagnostic  testing
screening and diagnostic testing
 
Epidemiological method to determine utility of a diagnostic test
Epidemiological method to determine utility of a diagnostic testEpidemiological method to determine utility of a diagnostic test
Epidemiological method to determine utility of a diagnostic test
 
Diagnostic testing 2009
Diagnostic testing 2009Diagnostic testing 2009
Diagnostic testing 2009
 
05 diagnostic tests cwq
05 diagnostic tests cwq05 diagnostic tests cwq
05 diagnostic tests cwq
 
VALIDITY AND RELIABLITY OF A SCREENING TEST seminar 2.pptx
VALIDITY AND RELIABLITY OF A SCREENING TEST seminar 2.pptxVALIDITY AND RELIABLITY OF A SCREENING TEST seminar 2.pptx
VALIDITY AND RELIABLITY OF A SCREENING TEST seminar 2.pptx
 
Validity and Screening Test
Validity and Screening TestValidity and Screening Test
Validity and Screening Test
 
Epidemiological Approaches for Evaluation of diagnostic tests.pptx
Epidemiological Approaches for Evaluation of diagnostic tests.pptxEpidemiological Approaches for Evaluation of diagnostic tests.pptx
Epidemiological Approaches for Evaluation of diagnostic tests.pptx
 
OAJ presentation final draft
OAJ presentation final draftOAJ presentation final draft
OAJ presentation final draft
 
Sensitivity, specificity, positive and negative predictive
Sensitivity, specificity, positive and negative predictiveSensitivity, specificity, positive and negative predictive
Sensitivity, specificity, positive and negative predictive
 
Dr Amit Diagnostic Tests.pptx
Dr Amit Diagnostic Tests.pptxDr Amit Diagnostic Tests.pptx
Dr Amit Diagnostic Tests.pptx
 
Evidence Based Diagnosis
Evidence Based DiagnosisEvidence Based Diagnosis
Evidence Based Diagnosis
 
Screening in Public Health
Screening in Public HealthScreening in Public Health
Screening in Public Health
 
Labs, damn lies, and statistics
Labs, damn lies, and statisticsLabs, damn lies, and statistics
Labs, damn lies, and statistics
 
session three epidemiology.pptx
session three epidemiology.pptxsession three epidemiology.pptx
session three epidemiology.pptx
 
session three epidemiology.pptx
session three epidemiology.pptxsession three epidemiology.pptx
session three epidemiology.pptx
 

Plus de mums1

Oranges
OrangesOranges
Oranges
mums1
 
Oranges
OrangesOranges
Oranges
mums1
 
چای سبز
چای سبزچای سبز
چای سبز
mums1
 
سلامت زنان
سلامت زنانسلامت زنان
سلامت زنان
mums1
 
رژیم غذایی مدیترانه ای
رژیم غذایی مدیترانه ایرژیم غذایی مدیترانه ای
رژیم غذایی مدیترانه ای
mums1
 
خواب ایمن
خواب ایمنخواب ایمن
خواب ایمن
mums1
 
سرطان
سرطانسرطان
سرطان
mums1
 

Plus de mums1 (7)

Oranges
OrangesOranges
Oranges
 
Oranges
OrangesOranges
Oranges
 
چای سبز
چای سبزچای سبز
چای سبز
 
سلامت زنان
سلامت زنانسلامت زنان
سلامت زنان
 
رژیم غذایی مدیترانه ای
رژیم غذایی مدیترانه ایرژیم غذایی مدیترانه ای
رژیم غذایی مدیترانه ای
 
خواب ایمن
خواب ایمنخواب ایمن
خواب ایمن
 
سرطان
سرطانسرطان
سرطان
 

Dernier

Call Girl in Indore 8827247818 {LowPrice} ❤️ (ahana) Indore Call Girls * UPA...
Call Girl in Indore 8827247818 {LowPrice} ❤️ (ahana) Indore Call Girls  * UPA...Call Girl in Indore 8827247818 {LowPrice} ❤️ (ahana) Indore Call Girls  * UPA...
Call Girl in Indore 8827247818 {LowPrice} ❤️ (ahana) Indore Call Girls * UPA...
mahaiklolahd
 
Russian Call Girls Lucknow Just Call 👉👉7877925207 Top Class Call Girl Service...
Russian Call Girls Lucknow Just Call 👉👉7877925207 Top Class Call Girl Service...Russian Call Girls Lucknow Just Call 👉👉7877925207 Top Class Call Girl Service...
Russian Call Girls Lucknow Just Call 👉👉7877925207 Top Class Call Girl Service...
adilkhan87451
 
Call Girl In Pune 👉 Just CALL ME: 9352988975 💋 Call Out Call Both With High p...
Call Girl In Pune 👉 Just CALL ME: 9352988975 💋 Call Out Call Both With High p...Call Girl In Pune 👉 Just CALL ME: 9352988975 💋 Call Out Call Both With High p...
Call Girl In Pune 👉 Just CALL ME: 9352988975 💋 Call Out Call Both With High p...
chetankumar9855
 

Dernier (20)

Mumbai ] (Call Girls) in Mumbai 10k @ I'm VIP Independent Escorts Girls 98333...
Mumbai ] (Call Girls) in Mumbai 10k @ I'm VIP Independent Escorts Girls 98333...Mumbai ] (Call Girls) in Mumbai 10k @ I'm VIP Independent Escorts Girls 98333...
Mumbai ] (Call Girls) in Mumbai 10k @ I'm VIP Independent Escorts Girls 98333...
 
Call Girls Service Jaipur {8445551418} ❤️VVIP BHAWNA Call Girl in Jaipur Raja...
Call Girls Service Jaipur {8445551418} ❤️VVIP BHAWNA Call Girl in Jaipur Raja...Call Girls Service Jaipur {8445551418} ❤️VVIP BHAWNA Call Girl in Jaipur Raja...
Call Girls Service Jaipur {8445551418} ❤️VVIP BHAWNA Call Girl in Jaipur Raja...
 
Best Rate (Patna ) Call Girls Patna ⟟ 8617370543 ⟟ High Class Call Girl In 5 ...
Best Rate (Patna ) Call Girls Patna ⟟ 8617370543 ⟟ High Class Call Girl In 5 ...Best Rate (Patna ) Call Girls Patna ⟟ 8617370543 ⟟ High Class Call Girl In 5 ...
Best Rate (Patna ) Call Girls Patna ⟟ 8617370543 ⟟ High Class Call Girl In 5 ...
 
Premium Call Girls In Jaipur {8445551418} ❤️VVIP SEEMA Call Girl in Jaipur Ra...
Premium Call Girls In Jaipur {8445551418} ❤️VVIP SEEMA Call Girl in Jaipur Ra...Premium Call Girls In Jaipur {8445551418} ❤️VVIP SEEMA Call Girl in Jaipur Ra...
Premium Call Girls In Jaipur {8445551418} ❤️VVIP SEEMA Call Girl in Jaipur Ra...
 
Saket * Call Girls in Delhi - Phone 9711199012 Escorts Service at 6k to 50k a...
Saket * Call Girls in Delhi - Phone 9711199012 Escorts Service at 6k to 50k a...Saket * Call Girls in Delhi - Phone 9711199012 Escorts Service at 6k to 50k a...
Saket * Call Girls in Delhi - Phone 9711199012 Escorts Service at 6k to 50k a...
 
Call Girls Raipur Just Call 9630942363 Top Class Call Girl Service Available
Call Girls Raipur Just Call 9630942363 Top Class Call Girl Service AvailableCall Girls Raipur Just Call 9630942363 Top Class Call Girl Service Available
Call Girls Raipur Just Call 9630942363 Top Class Call Girl Service Available
 
Call Girl in Indore 8827247818 {LowPrice} ❤️ (ahana) Indore Call Girls * UPA...
Call Girl in Indore 8827247818 {LowPrice} ❤️ (ahana) Indore Call Girls  * UPA...Call Girl in Indore 8827247818 {LowPrice} ❤️ (ahana) Indore Call Girls  * UPA...
Call Girl in Indore 8827247818 {LowPrice} ❤️ (ahana) Indore Call Girls * UPA...
 
Call Girls in Delhi Triveni Complex Escort Service(🔝))/WhatsApp 97111⇛47426
Call Girls in Delhi Triveni Complex Escort Service(🔝))/WhatsApp 97111⇛47426Call Girls in Delhi Triveni Complex Escort Service(🔝))/WhatsApp 97111⇛47426
Call Girls in Delhi Triveni Complex Escort Service(🔝))/WhatsApp 97111⇛47426
 
Russian Call Girls Lucknow Just Call 👉👉7877925207 Top Class Call Girl Service...
Russian Call Girls Lucknow Just Call 👉👉7877925207 Top Class Call Girl Service...Russian Call Girls Lucknow Just Call 👉👉7877925207 Top Class Call Girl Service...
Russian Call Girls Lucknow Just Call 👉👉7877925207 Top Class Call Girl Service...
 
Call Girls Hosur Just Call 9630942363 Top Class Call Girl Service Available
Call Girls Hosur Just Call 9630942363 Top Class Call Girl Service AvailableCall Girls Hosur Just Call 9630942363 Top Class Call Girl Service Available
Call Girls Hosur Just Call 9630942363 Top Class Call Girl Service Available
 
Night 7k to 12k Chennai City Center Call Girls 👉👉 7427069034⭐⭐ 100% Genuine E...
Night 7k to 12k Chennai City Center Call Girls 👉👉 7427069034⭐⭐ 100% Genuine E...Night 7k to 12k Chennai City Center Call Girls 👉👉 7427069034⭐⭐ 100% Genuine E...
Night 7k to 12k Chennai City Center Call Girls 👉👉 7427069034⭐⭐ 100% Genuine E...
 
Night 7k to 12k Navi Mumbai Call Girl Photo 👉 BOOK NOW 9833363713 👈 ♀️ night ...
Night 7k to 12k Navi Mumbai Call Girl Photo 👉 BOOK NOW 9833363713 👈 ♀️ night ...Night 7k to 12k Navi Mumbai Call Girl Photo 👉 BOOK NOW 9833363713 👈 ♀️ night ...
Night 7k to 12k Navi Mumbai Call Girl Photo 👉 BOOK NOW 9833363713 👈 ♀️ night ...
 
Call Girls Rishikesh Just Call 9667172968 Top Class Call Girl Service Available
Call Girls Rishikesh Just Call 9667172968 Top Class Call Girl Service AvailableCall Girls Rishikesh Just Call 9667172968 Top Class Call Girl Service Available
Call Girls Rishikesh Just Call 9667172968 Top Class Call Girl Service Available
 
Call Girls Madurai Just Call 9630942363 Top Class Call Girl Service Available
Call Girls Madurai Just Call 9630942363 Top Class Call Girl Service AvailableCall Girls Madurai Just Call 9630942363 Top Class Call Girl Service Available
Call Girls Madurai Just Call 9630942363 Top Class Call Girl Service Available
 
Call Girls Service Jaipur {9521753030} ❤️VVIP RIDDHI Call Girl in Jaipur Raja...
Call Girls Service Jaipur {9521753030} ❤️VVIP RIDDHI Call Girl in Jaipur Raja...Call Girls Service Jaipur {9521753030} ❤️VVIP RIDDHI Call Girl in Jaipur Raja...
Call Girls Service Jaipur {9521753030} ❤️VVIP RIDDHI Call Girl in Jaipur Raja...
 
Call Girls Rishikesh Just Call 8250077686 Top Class Call Girl Service Available
Call Girls Rishikesh Just Call 8250077686 Top Class Call Girl Service AvailableCall Girls Rishikesh Just Call 8250077686 Top Class Call Girl Service Available
Call Girls Rishikesh Just Call 8250077686 Top Class Call Girl Service Available
 
9630942363 Genuine Call Girls In Ahmedabad Gujarat Call Girls Service
9630942363 Genuine Call Girls In Ahmedabad Gujarat Call Girls Service9630942363 Genuine Call Girls In Ahmedabad Gujarat Call Girls Service
9630942363 Genuine Call Girls In Ahmedabad Gujarat Call Girls Service
 
Call Girls Hyderabad Just Call 8250077686 Top Class Call Girl Service Available
Call Girls Hyderabad Just Call 8250077686 Top Class Call Girl Service AvailableCall Girls Hyderabad Just Call 8250077686 Top Class Call Girl Service Available
Call Girls Hyderabad Just Call 8250077686 Top Class Call Girl Service Available
 
Coimbatore Call Girls in Coimbatore 7427069034 genuine Escort Service Girl 10...
Coimbatore Call Girls in Coimbatore 7427069034 genuine Escort Service Girl 10...Coimbatore Call Girls in Coimbatore 7427069034 genuine Escort Service Girl 10...
Coimbatore Call Girls in Coimbatore 7427069034 genuine Escort Service Girl 10...
 
Call Girl In Pune 👉 Just CALL ME: 9352988975 💋 Call Out Call Both With High p...
Call Girl In Pune 👉 Just CALL ME: 9352988975 💋 Call Out Call Both With High p...Call Girl In Pune 👉 Just CALL ME: 9352988975 💋 Call Out Call Both With High p...
Call Girl In Pune 👉 Just CALL ME: 9352988975 💋 Call Out Call Both With High p...
 

Validity

  • 3. Definition How well a survey measures what it sets out to measure. Validity can be determined only if there is a reference procedure of “gold standard”. Food–frequency questionnaires food diaries Birth weight hospital record.
  • 4. validity Three method: 1- content validity 2-criterion –related validity 3-construct validity
  • 5. Screening test Validity – get the correct result Sensitive – correctly classify cases Specificity – correctly classify non-cases [screening and diagnosis are not identical]
  • 6. Validity: 1) Sensitivity Probability (proportion) of correct classification of cases / Cases found all cases
  • 7. Validity: 2) Specificity Probability (proportion) of correct classification of noncases / Noncases identified all noncases
  • 8. 2 cases / month OO  OO O  O   O  O O OO  O  
  • 9. Pre-detectable preclinical clinical old OO  OO O  O O  O  O O  O O OO  O  
  • 10. Pre-detectable pre-clinical clinical old O O O OO  OO O O O O  O O O O  O O O O  O O O O O  O O OOO  O O O O O  O O O O 
  • 11. What is the prevalence of “the condition”? O O O OO  OO O O O O  O O O O  O O O O  O O O O O  O O OOO  O O O O O  O O O O 
  • 12. Sensitivity of a screening test Probability (proportion) of correct classification of detectable, pre- clinical cases
  • 13. Pre-detectable pre-clinical clinical old (8) (10) (6) (14) O O O OO  OO O O O O  O O O O  O O O O  O O O O O  O O OOO  O O O O O  O O O O 
  • 14. Correctly classified Sensitivity: ––––––––––––––––––––––––––– Total detectable pre-clinical (10) O O O OO  OO O O O O  O O O O  O O O O  O O O O O  O O OOO  O O O O O  O O O O 
  • 15. Specificity of a screening test Probability (proportion) of correct classification of noncases Noncases identified / all noncases
  • 16. Pre-detectable pre-clinical clinical old (8) (10) (6) (14) O O O OO  OO O O O O  O O O O  O O O O  O O O O O  O O OOO  O O O O O  O O O O 
  • 17. Correctly classified Specificity: ––––––––––––––––––––––––––––– Total non-cases (& pre-detect) (162 or 170) O O O OO  OO O O O O  O O O O  O O O O  O O O O O  O O OOO  O O O O O  O O O O 
  • 18. True Disease Status Cases Non-cases True False Positive positive positive a+b Screening ab Test c d True Results False c+d Negative negative negative a+c b+d True positives a Sensitivity = = All cases a+c True negatives d Specificity = = All non-cases b+d
  • 19. True Disease Status Cases Non-cases Positive 140 1,000 1,140 Screening ab Test c d Results 19,000 19,060 Negative 60 200 20,000 True positives 140 Sensitivity = = = 70% All cases 200 Specificity = True negatives = 19,000 = 95% All non-cases 20,000
  • 20. Interpreting test results: predictive value Probability (proportion) of those tested who are correctly classified Cases identified / all positive tests Noncases identified / all negative tests
  • 21. True Disease Status Cases Non-cases True False Positive positive positive a+b Screening ab Test c d True Results False c+d Negative negative negative a+c b+d True positives a PPV = = All positives a+b True negatives d NPV = = All negatives c+d
  • 22. True Disease Status Cases Non-cases Positive 140 1,000 1,140 Screening ab Test c d Results 19,000 19,060 Negative 60 200 20,000 True positives 140 PPV = = = 12.3% All positives 1,140 19,000 NPV = True negatives = = 99.7% All negatives 19,060
  • 23. Confidence interval point estimate+_[1.96*SE(estimate)] SE(sensivity)=√P(1-P) N SE=0.013 0.70-(1.96*0.013)=0.67 0.70+(1.96*0.013)=0.95
  • 24. Receiver operating characteristic (ROC) curve Not aIl tests give a simple yes/no result. Some yield results that are numerical values along a continuous scale of measurement. in these situations, high sensitivity is obtained at the cost of low specificity and vice versa
  • 26. Reliability Repeatability – get same result Each time From each instrument From each rater If don’t know correct result, then can examine reliability only.
  • 27. Definition The degree of stability exhibited when a measurement is repeated under identical conditions Lack of reliability may arise from divergences between observers or instruments of measurement or instability of the attribute being measured (from Last. Dictionary of Epidemiology.
  • 29. Assessment of reliability
    Test–retest reliability
    Equivalence
    Internal consistency (SPSS)
    Reliability: Kappa
  • 30. EXAMPLE OF PERCENT AGREEMENT Two physicians are each given a set of 100 X-rays to look at independently and asked to judge whether pneumonia is present or absent. When both sets of diagnoses are tallied, it is found that 95% of the diagnoses are the same.
  • 31. IS PERCENT AGREEMENT GOOD ENOUGH? Do these two physicians exhibit high diagnostic reliability? Can there be 95% agreement between two observers without really having good reliability?
  • 32. Compare the two tables below:
    Table 1                          Table 2
                 MD#1                            MD#1
                 Yes    No                       Yes    No
    MD#2  Yes      1     3          MD#2  Yes     43     3
          No       2    94                No       2    52

    In both instances, the physicians agree 95% of the time. Are the two physicians equally reliable in the two tables?
  • 33. USE OF THE KAPPA STATISTIC TO ASSESS RELIABILITY Kappa is a widely used test of inter- or intra-observer agreement (or reliability) which corrects for chance agreement.
  • 34. KAPPA VARIES FROM +1 TO −1
    +1 means that the two observers are perfectly reliable: they classify everyone exactly the same way.
    0 means there is no relationship at all between the two observers' classifications beyond the agreement that would be expected by chance.
    −1 means the two observers classify exactly the opposite of each other: if one observer says yes, the other always says no.
  • 35. GUIDE TO USE OF KAPPAS IN EPIDEMIOLOGY AND MEDICINE
    Kappa > 0.80 is considered excellent
    Kappa 0.60–0.80 is considered good
    Kappa 0.40–0.60 is considered fair
    Kappa < 0.40 is considered poor
  • 36. HOW TO CALCULATE KAPPA
    1. Calculate observed agreement (observations on which the observers agree / total observations). In both Table 1 and Table 2 it is 95%.
    2. Calculate expected (chance) agreement based on the marginal totals.
  • 37. Table 1's marginal totals are:
    OBSERVED              MD#1
                          Yes    No
    MD#2     Yes            1     3       4
             No             2    94      96
                            3    97     100
  • 38. How do we calculate the N expected by chance in each cell? We assume that each cell should reflect the marginal distributions, i.e. the proportion of yes and no answers should be the same within the four-fold table as in the marginal totals.
    OBSERVED              MD#1
                          Yes    No
    MD#2     Yes            1     3       4
             No             2    94      96
                            3    97     100

    EXPECTED              MD#1
                          Yes    No
    MD#2     Yes                          4
             No                          96
                            3    97     100
  • 39. To do this, we find the proportion of answers in either the column (3% and 97%, yes and no respectively, for MD #1) or row (4% and 96%, yes and no respectively, for MD #2) marginal totals, and apply one of the two proportions to the other marginal total. For example, 96% of the row totals are in the "No" category. Therefore, by chance, 96% of MD #1's 97 "No" answers should also be "No" for MD #2: 96% of 97 is 93.12.
    EXPECTED              MD#1
                          Yes     No
    MD#2     Yes                           4
             No                 93.12     96
                            3      97    100
  • 40. By subtraction, all other cells fill in automatically, and each yes/no distribution reflects the marginal distribution. Any cell could have been used to make the calculation, because once one cell is specified in a 2×2 table with fixed marginal distributions, all other cells are also specified.
    EXPECTED              MD#1
                          Yes      No
    MD#2     Yes         0.12    3.88      4
             No          2.88   93.12     96
                            3      97    100
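The expected table on slide 40 follows directly from the products of the marginal totals; a minimal sketch of that step:

```python
# Expected cell counts for Table 1, from the marginal totals alone
row_totals = [4, 96]    # MD#2: yes, no
col_totals = [3, 97]    # MD#1: yes, no
n = 100

# Each expected cell is (row total * column total) / grand total
expected = [[r * c / n for c in col_totals] for r in row_totals]
print(expected)   # [[0.12, 3.88], [2.88, 93.12]]
```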
  • 41. Now you can see that, just by the operation of chance, 93.24 of the 100 observations (93.12 + 0.12) should have been agreed to by the two observers.
    EXPECTED              MD#1
                          Yes      No
    MD#2     Yes         0.12    3.88      4
             No          2.88   93.12     96
                            3      97    100
  • 42. Below is the formula for calculating Kappa from expected agreement:

    Kappa = (Observed agreement − Expected agreement) / (100% − Expected agreement)

          = (95% − 93.24%) / (100% − 93.24%) = 1.76% / 6.76% = 0.26
  • 43. How good is a Kappa of 0.26?
    Kappa > 0.80 is considered excellent
    Kappa 0.60–0.80 is considered good
    Kappa 0.40–0.60 is considered fair
    Kappa < 0.40 is considered poor
  • 44. In the second example (Table 2), the observed agreement was also 95%, but the marginal totals were very different:
    EXPECTED              MD#1
                          Yes    No
    MD#2     Yes                         46
             No                          54
                           45    55     100
  • 45. Using the same procedure as before, we calculate the expected N in any one cell, based on the marginal totals. For example, the lower-right cell is 54% of 55, which is 29.7.
    EXPECTED              MD#1
                          Yes     No
    MD#2     Yes                          46
             No                  29.7     54
                           45      55    100
  • 46. And, by subtraction, the other cells are as below. The cells which indicate agreement (20.7 and 29.7) add up to 50.4.
    EXPECTED              MD#1
                          Yes      No
    MD#2     Yes         20.7    25.3     46
             No          24.3    29.7     54
                           45      55    100
  • 47. Enter the two agreements into the formula:

    Kappa = (Observed agreement − Expected agreement) / (100% − Expected agreement)

          = (95% − 50.4%) / (100% − 50.4%) = 44.6% / 49.6% = 0.90

    In this example, the observers have the same percent agreement, but now their agreement is much greater than expected by chance. A Kappa of 0.90 is considered excellent.
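The whole calculation walked through on slides 36–47 can be sketched as one small function (cell layout as on slide 32: rows are MD #2, columns are MD #1; Python is not part of the original deck):

```python
def kappa(table):
    """Cohen's kappa for a 2x2 agreement table [[a, b], [c, d]]."""
    n = sum(sum(row) for row in table)
    # observed agreement: proportion on the main diagonal
    observed = (table[0][0] + table[1][1]) / n
    # expected (chance) agreement from the marginal totals
    row = [sum(r) for r in table]
    col = [table[0][j] + table[1][j] for j in range(2)]
    expected = sum(row[i] * col[i] for i in range(2)) / n ** 2
    return (observed - expected) / (1 - expected)

print(f"Table 1: {kappa([[1, 3], [2, 94]]):.2f}")    # 0.26
print(f"Table 2: {kappa([[43, 3], [2, 52]]):.2f}")   # 0.90
```

Both tables have 95% observed agreement, but only Table 2's agreement is far above chance, which is exactly the point of correcting for expected agreement.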