DEVELOPMENTAL READING ASSESSMENT (DRA)
Published by Pearson

Reviewed by: Carolyn Kick
AUTHORS
Joetta Beaver
With a bachelor of science in elementary education and a master's degree in
reading from The Ohio State University, Joetta Beaver worked as an
elementary teacher (K-5) for 30 years, as well as a K-5 Language
Arts/Assessment coordinator and an Early Education teacher-leader. She is
the primary author of DRA2 K-3, co-author of DRA2 4-8 and the Developing
Writer's Assessment (DWA), a consultant, and a speaker.
Mark Carter, PhD
With assessment the focus of much of his professional work, Mark Carter
served as coordinator of assessment for Upper Arlington Schools (where
he currently teaches fifth grade), conducted numerous seminars, and co-
authored DRA2 4-8, the DWA, and Portfolio Assessment in the Reading
Classroom. He received his doctorate from The Ohio State University,
where he also taught graduate courses in education as an adjunct
professor.
OVERVIEW OF THE DRA
 The Developmental Reading Assessment is a set of individually administered, criterion-referenced assessments for grades K-8.
 Purpose: Identify students’ reading level based on accuracy, fluency, and comprehension.
 Other purposes: Identify students’ strengths and weaknesses at their independent reading level, plan instruction, monitor reading growth, and prepare students for testing expectations.
 The assessment is administered one-on-one, requiring students to read specifically selected leveled assessment texts that increase in difficulty.
 Administered, scored, and interpreted by classroom teachers.
DRA HISTORY & REVISIONS
 1988-1997- DRA is researched and developed by Joetta Beaver and the Upper Arlington School District
 1997- DRA K-3 is published by Pearson
 1999- Evaluation of the Development of Reading
 2002- DRA 4-8
 2004- DRA Word Analysis
 2005- DRA Second Edition (DRA2), K-3 & 4-8
 2006- Evaluation of the Development of Reading
 2007- More than 250,000 classrooms use DRA and EDL
 2008- Pearson partners with Liberty Source on DRA2 Handheld Tango Edition
 2009- DRA2 Handheld- Tango wins CODIE Award
DRA READING ASSESSMENT CRITERIA
 Oral Reading Ability
 Fluency
 Comprehension
 Word Analysis (Grade K)
ORAL READING AND FLUENCY

The total number of oral reading errors is converted to an accuracy score, and reading rate is measured in words correct per minute (WCPM).
Expression, phrasing, rate, and accuracy are rated on a 4-point scale. This begins at Level 14, the transitional level (grades 1 and 2).
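
As a rough illustration, the sketch below computes an accuracy percentage and a WCPM rate from a passage length, error count, and reading time. The formulas are the standard running-record ones, not taken from the DRA manuals, and all numbers are hypothetical.

# Minimal sketch of oral reading scoring in Python (hypothetical values;
# standard running-record formulas, not the official DRA rubric).

def accuracy_score(total_words, errors):
    """Percent of words read correctly."""
    return 100.0 * (total_words - errors) / total_words

def wcpm(total_words, errors, seconds):
    """Words correct per minute."""
    return (total_words - errors) / (seconds / 60.0)

# Example: 212-word passage, 6 errors, read in 150 seconds.
print(round(accuracy_score(212, 6), 1))  # 97.2
print(round(wcpm(212, 6, 150), 1))       # 82.4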
COMPREHENSION
 At Levels 3-16, once the oral reading is over, the student takes the book and reads it again silently. This gives them another opportunity to check their comprehension before retelling. Students then retell what happens in the story.
 Underline information that the student gives independently.
 Note information that the student is able to give, but which requires prompting, with a TP (teacher prompt).
 Follow-up questions follow the summary and, if used, need to be tallied to the left. The number of prompts needed to elicit more information is calculated as part of the comprehension score.
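
A hypothetical sketch of how those prompt tallies might feed into a comprehension score: the rubric total and the one-point-per-prompt penalty are invented for illustration and are not the official DRA2 rubric.

# Hypothetical illustration of folding teacher-prompt (TP) tallies into
# a comprehension score; the point values here are invented, not DRA2's.

def comprehension_score(retell_rubric_points, teacher_prompts, max_points=24):
    """Subtract one point per prompt from the retelling rubric total."""
    score = retell_rubric_points - teacher_prompts
    return max(0, min(score, max_points))

print(comprehension_score(retell_rubric_points=20, teacher_prompts=3))  # 17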
WORD ANALYSIS

Assesses phonological awareness,
 metalanguage, letter/word recognition,
 phonics, and structural analysis in
 grades K-3. DRA Word Analysis is
 included in the new second edition of
 DRA K-3.
READING LEVELS
 Emergent: Levels A-3 (Kindergarten)
 Early: Levels 4-12 (Grade 1)
 Transitional: Levels 14-24 (Grades 1 & 2)
 Extending: Levels 28-39 (Grades 2 & 3)
 Intermediate/Middle School: Levels 40-80 (Grades 4-8)
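
Those bands translate directly into a lookup, sketched below. Treating level "A" as sitting below level 1 is an assumption made for illustration.

# Map a DRA text level to its reading stage (bands from the table above;
# treating "A" as below level 1 is an assumption for illustration).

def dra_stage(level):
    n = 0 if level == "A" else int(level)
    if n <= 3:
        return "Emergent (Kindergarten)"
    if n <= 12:
        return "Early (Grade 1)"
    if n <= 24:
        return "Transitional (Grades 1-2)"
    if n <= 39:
        return "Extending (Grades 2-3)"
    return "Intermediate/Middle School (Grades 4-8)"

print(dra_stage("A"))  # Emergent (Kindergarten)
print(dra_stage(16))   # Transitional (Grades 1-2)
print(dra_stage(40))   # Intermediate/Middle School (Grades 4-8)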
INFORMATION ON DRA VALIDITY
KORETZ ON VALIDITY
“…Validity, which is the single most important criterion for evaluating achievement testing. …but, tests themselves are not valid or invalid. Rather, it is an inference based on test scores that is valid or invalid. …Validity is also a continuum: inferences are rarely perfect. The question to ask is how well supported the conclusion is” (Koretz, 2008, p. 31).
VALIDITY CONT.

 Messick (1994) argued that construct validity refers to the inferences that are drawn about score meaning, specifically the score interpretation and the implications for test use (quantitative). This theoretical framework is subject to empirical challenge, yielding a unified approach to validity.
 What is the test measuring?
 Can it measure what it intends to measure?
FOUR TYPES OF VALIDATION
 Predictive (criterion-oriented)
 Concurrent (criterion-oriented)
 Content
 Construct
CRITERION VALIDITY

Predictive validity is where we draw an inference from test scores to future performance.
Concurrent validity is studied when a test is proposed as a substitute for another, or a test is shown to correlate with some contemporary criterion (Cronbach & Meehl, 1955).
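
Both forms reduce to correlating test scores with a criterion measure. The sketch below, with invented scores for eight students, shows the distinction: the predictive criterion is collected later, the concurrent one at the same time (numpy's corrcoef returns the Pearson r).

import numpy as np

# Hypothetical scores for 8 students (invented data for illustration).
dra_fall       = np.array([12, 16, 14, 20, 24, 18, 10, 22])
state_spring   = np.array([310, 355, 330, 390, 420, 360, 295, 405])  # later criterion
other_test_now = np.array([11, 17, 13, 19, 25, 17, 11, 23])          # contemporary criterion

# Predictive validity: test scores vs. later performance.
r_predictive = np.corrcoef(dra_fall, state_spring)[0, 1]

# Concurrent validity: test scores vs. a criterion measured at the same time.
r_concurrent = np.corrcoef(dra_fall, other_test_now)[0, 1]

print(f"predictive r = {r_predictive:.2f}, concurrent r = {r_concurrent:.2f}")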
CONTENT VALIDITY

According to Yu, content validity is when we draw inferences from test scores to a larger domain of items similar to those on the test (sample-population representativeness).
This selection of content is usually done by experts.
Such experts, however, may lack experience in the field, and the process assumes that all reviewers are experts.
CONSTRUCT VALIDITY
According to Hunter and Schmidt (1990), construct validity is a quantitative question rather than a qualitative distinction such as "valid" or "invalid"; it is a matter of degree. Construct validity can be measured by the correlation between the intended independent variable (construct) and the proxy independent variable (indicator, sign) that is actually used.
-Yu
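
That "matter of degree" framing can be made concrete with a simulation: generate a latent construct, derive a noisy indicator of it, and report the construct-indicator correlation as a degree of validity rather than a valid/invalid verdict. Everything below is simulated for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Latent construct (e.g., "true reading ability") for 500 simulated students.
construct = rng.normal(loc=0.0, scale=1.0, size=500)

# Proxy indicator actually used: the construct plus measurement noise.
indicator = construct + rng.normal(loc=0.0, scale=0.6, size=500)

# Construct validity as a matter of degree: the construct-indicator correlation.
r = np.corrcoef(construct, indicator)[0, 1]
print(f"construct-indicator r = {r:.2f}")  # roughly 0.86 at this noise level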
PEARSON EDUCATION ON DRA VALIDITY
 Pearson states that, referring “…to validity of an assessment, one looks at the extent to which the assessment actually measures what it is supposed to measure.” Questions to ask when examining validity include:
 Does this assessment truly measure reading ability?
 Can teachers make accurate inferences about the true reading ability of a student based upon DRA2 assessment results?
PEARSON EDUCATION ON CONTENT-RELATED VALIDITY OF THE DRA
 The content validity of a test relates to the adequacy with which the content is covered in the test.
 Per its “Theoretical Framework and Research,” the DRA2 incorporates reading domains drawn from reviews of research on good readers, developed with consultants and educators.
 Content validity was built into the DRA and DRA2 assessments during the development process.
PEARSON CRITERION-RELATED VALIDITY ON THE DRA
 Criterion-related validity refers to the extent to which a measure predicts performance on some other significant measure (called a criterion) other than the test itself. Criterion validity may be broken down into two components: concurrent and predictive.
 Concurrent validity studies correlate the DRA with several other reading measures:
 Gray Oral Reading Test, 4th Edition (GORT-4; Wiederholt & Bryant, 2001)
 DIBELS Oral Reading Fluency Test, 6th Edition
 Correlations between DRA2 and teacher ratings
DRA REVIEW, NATALIE RATHVON, PH.D.
The following evidence of validation is based upon the review of the DRA completed by:

Natalie Rathvon, Ph.D., Assistant Clinical Professor, George Washington University, Washington, DC; Private Practice Psychologist and School Consultant, Bethesda, MD (August 2006)
DRA CONTENT VALIDITY
In a review by Natalie Rathvon, Ph.D.:
 Oral fluency, running record: derived solely from Clay’s Observational Survey (Clay, 1993).
 Teacher surveys (return rates of 46%; ns of 80 to 175) revealed that the DRA provided teachers with information describing reading behaviors and identifying instructional goals.
 There were also concerns about the adequacy and accuracy of the comprehension assessment, and about the accuracy of text leveling prior to 2003, before the Lexile framework was used to evaluate the readability of the DRA texts.
 Concerns about who developed and reviewed the assessment: there is no evidence that external reviewers participated in the development, revision, or validation process.
 Rathvon states, “Means, standard deviations, and standard errors of measurement should be presented for accuracy, rate, and comprehension scores for field test students reading adjacent text levels to document level-to-level progression.”
CONSTRUCT VALIDITY EVIDENCE
 Results from Louisiana statewide DRA administrations for spring of 2000 through 2002 for students in grades 1 through 3 (ns = 4,162 to 74,761) show an increase in DRA levels across grades, as well as changes in DRA level for a matched sample of students (n = 32,739) over a three-year period. This indicates that the skills being measured are developmental.
 This is evidence that the DRA can detect changes in reading levels.
 Two studies evaluating the relationship between Lexile Scale measures and the DRA provide evidence that the running-record format is a valid method of assessing reading comprehension.
SUMMARY OF WHAT DRA IS:
 An attractive reading battery, modeled after an informal reading inventory and based on Clay’s Observational Survey (Clay, 1993)
 Authentic Texts
 Instructionally relevant measures of fluency and
  comprehension
 Provides meaningful results for classroom teachers,
  parents, and other stakeholders
 Provides encouraging evidence that the use of DRA
  predicts future reading achievement for primary grade
  students.
DRA CRITERION-RELATED VALIDITY
 No concurrent validity evidence is presented documenting the relationship between the DRA and standardized or criterion-referenced tests of reading, vocabulary, language, or other relevant domains for students in kindergarten or grades 4 through 8.
 Studies examining the extent to which individual students obtain identical performance levels on the DRA and on validated reading measures are especially needed.
 No information is provided to document the relationship between the DRA Word Analysis and any criterion measure.
 No concurrent validity evidence is presented for any of the DRA assessments in terms of the relationship between DRA performance and contextually relevant performance measures, such as teacher ratings of student achievement or classroom grades.
SUMMARY OF WHAT DRA IS:
 Responsive to intervention for primary grade students
 An assessment model that has raised teacher awareness of student reading levels and of matching students with appropriate texts
 Teacher-reviewed, with surveys based on classroom practice (return rates of 46%; ns of 80 to 175) (Rathvon, 2006)
 Supported by evidence that the Lexile Scale measures and the DRA running-record format constitute a valid method of assessing reading comprehension
SUMMARY OF WHAT DRA IS NOT:
 Informal reading inventories often lack reliability and validity (Invernizzi et al.; Spector, 2005). The DRA does not:
 Provide evidence of text equivalence within levels
 Provide evidence of overall reading level for half the grade levels
 Have a consistent process of text selection, scoring, and administration; it is vulnerable to teacher inconsistencies and judgments (improved since the Lexile model)
 Provide enough evidence of criterion-related validity for older students
 Provide concurrent validity evidence documenting the relationship between the DRA and standardized or criterion-referenced tests of reading, vocabulary, or language in kindergarten and grades 4-8
 Document the relationship between the DRA Word Analysis and any criterion measure
SUMMARY OF WHAT DRA IS NOT:
 Provide sufficient evidence that teachers can select texts aligned with students’ actual reading level (or achieve acceptable levels of scorer consistency and accuracy)
 Provide evidence across demographic groups
 Include external reviewers in the development, revision, and validation of any DRA series
 Provide complete field-testing reporting
 Provide a theoretical rationale or empirical data supporting the omission of a standard task to estimate student reading level
 Provide means, standard deviations, and standard errors of measurement to ensure accuracy
WHAT DOES ALL OF THIS MEAN?
Learning about the validity of the Developmental Reading Assessment was difficult. I have yet to administer one, but would like to go through the process. There is no empirical evidence that consistently supports the validity of the DRA; there are far too many variables and opportunities for human behavior to alter results and affect the variability.
However, how teachers approach diagnosing students’ reading levels, and their awareness of bringing leveled texts into the classroom, have changed dramatically over the past few years. Instruction based on DRA results (though the inferences are not well validated) has changed reading instruction in our district.
RESOURCES
 http://mypearsontraining.com/pdfs/TG_DRA2_ProgramComponents.pdf
 DRA K-3
  PMDBSUBCATEGORYID=&PMDBSITEID=2781&PMDBSUBSOLUTIONID=&PMDBSOLUTIONID=&PMDBSUBJECTAREAID=&PMDBCATEGORYID=&PMDbProgramID=23662
