Survey Reduction Techniques
CARMA Internet Research Module
Jeffrey Stanton
Primary Goal: Reduce Administration Time
Secondary goals:
- Reduce perceived administration time
- Increase the respondent's engagement with the experience of completing the instrument; lock in interest and excitement from the start
- Reduce missing and erroneous data due to carelessness, rushing, forms that are hard to use, etc.
- Increase the respondents' ease of experience (maybe even enjoyment!) so that they will persist to the end AND respond again next year (or whenever the next survey comes out)
Conclusions?
- Make the survey SEEM as short and compact as possible
- Streamline the WHOLE EXPERIENCE, from the first call for participation all the way to the end of the final page of the instrument
- Focus survey-reduction efforts on the easy stuff before diving into the nitty-gritty statistical stuff
Please choose the option that most closely fits how you describe yourself. Please select only one of the two options: Female [ ]    Male [ ]
Instruction Reduction
- Fewer than 4% of respondents make use of printed instructions: Novick and Ward (2006, ACM-SIGDOC)
- Comprehension of instructions only influences novice performance on surveys: Catrambone (1990; HCI)
- Instructions are written, on average, five grade levels above the average grade level of the respondent; 23% of respondents failed to understand at least one element of the instructions: Spandorfer et al. (1993; Annals of EM)
Unless you are working with a special/unusual population, you can assume that respondents know how to complete Likert scales and other common response formats without instructions. Most people don't read instructions anyway, and when they do, the instructions often don't help them respond any better!
If your response format is so novel that people require instructions, then you have a substantial pilot-testing burden to ensure that people comprehend the instructions and respond appropriately. Otherwise, do not take the risk!
Archival Demographics
Demographic data that may already exist in organizational records:
- gender, race/ethnicity, age
- managers and non-managers
- exempt and non-exempt
- part time and full time
- unit and departmental affiliations
Self-completed demographic data frequently contain missing fields or intentional mistakes.
Archival Demographics
Respondents should not feel that the demographic items serve to identify them in their survey responses.
You could offer respondents two choices: (a) match (or automatically fill in) some or all demographic data using the code number provided in your invitation email (or on a paper letter), or (b) have them fill in the demographic data themselves (on web-based surveys, a reveal can branch respondents to the demographics page).
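The code-number matching option above can be sketched as a simple data merge between an archival extract and the survey responses; all table and column names here are illustrative, not taken from any particular survey system:

```python
import pandas as pd

# Hypothetical archival HR extract, keyed by the invitation code
# sent to each respondent (names and values are illustrative).
archival = pd.DataFrame({
    "invite_code": ["A101", "A102", "A103"],
    "gender": ["F", "M", "F"],
    "exempt_status": ["exempt", "non-exempt", "exempt"],
    "department": ["R&D", "Sales", "R&D"],
})

# Survey responses carry only the code, not self-reported demographics.
responses = pd.DataFrame({
    "invite_code": ["A103", "A101"],
    "satisfaction": [4, 5],
})

# Left-join so every response keeps its row even if a code is unmatched.
merged = responses.merge(archival, on="invite_code", how="left")
```

The respondent never retypes demographics, which removes those items from the instrument entirely and avoids intentional mistakes in self-report.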
Eligibility, Skip Logic, and Branching
- Eligibility: If a survey has eligibility requirements, place the screening questions at the earliest possible point in the survey. (Eligibility requirements can appear in instructions, but instructions should not be the sole method of screening out ineligible respondents.)
- Skip logic: Skip logic actually shortens the survey by setting aside questions for which the respondent is ineligible.
- Branching: Branching may not shorten the survey, but it can improve the user experience by offering questions specifically focused on the respondent's demographics or reported experience.
(Illustration credit: Vovici.com)
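A minimal sketch of how skip-logic routing might be represented: a table maps each question and answer to the next question, so ineligible blocks are never shown. The question ids, wording, and answers are all hypothetical.

```python
# Routing table: for each question, map answers to the next question id.
SURVEY = {
    "Q1": {"text": "Did you stay in a hotel this year?",
           "next": {"yes": "Q2", "no": "END"}},
    "Q2": {"text": "How many nights in total?",
           "next": {"default": "END"}},
}

def next_question(current, answer):
    """Return the id of the next question given the current answer;
    unmatched answers fall through to the 'default' route."""
    routes = SURVEY[current]["next"]
    return routes.get(answer, routes.get("default", "END"))
```

A "no" at Q1 routes straight to the end, so the ineligible respondent never sees the hotel questions.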
Implications: Eligibility, Skip Logic, and Branching
Ever answered a survey where you knew that your answer would predict how many questions you would have to answer after that? E.g., "How many hotel chains have you been to in the last year?"
If respondents can predict that eligibility screening, skip logic, or branching will lead to longer, more complex, or more tedious responses, they may:
- Abandon the survey
- Back up and change their answer to the conditional question that requires less work (if the interface permits it)
Branch design should try not to imply what the respondent would have experienced in another branch, and paths through the survey should avoid causing considerably more work for some respondents than for others.
Panel Designs and/or Multiple Administration
Panel designs measure the same respondents on multiple occasions. Typically, either predictors are gathered at an early point in time and outcomes at a later point, or both predictors and outcomes are measured at every time point (there are variations on these two themes).
Panel designs suit maturation and/or intervention processes that require the passage of time. Examples: career aspirations over time, person-organization fit over time, training before/after.
Minimally, panel designs can help mitigate (though not solve) the problem of common method bias; e.g., when responding to a criterion at time 2, respondents tend to have forgotten how they responded at time 1.
Panel Designs and/or Multiple Administration
Survey designers can apply the logic of panel designs to their own surveys: sometimes you have to collect a large number of variables (no measure shortening is possible), and it is impractical to do so in a single administration.
Generally speaking, it is better to have many short, pleasant survey administrations with a cumulative "work time lost" of an hour than one long, grinding hour-long survey. The former can yield happier, less fatigued respondents and, hopefully, better data.
In the limit, consider the implications of a "Today's Poll" approach to measuring climate, stress, satisfaction, or other attitudinal variables: one question per day, every day.
Unobtrusive Behavioral Observation
Surveys appear convenient and relatively inexpensive in and of themselves; however, the cumulative work time lost across all respondents may be quite large.
Methods that assess social variables through observations of overt behavior rather than self-report can provide indications of stress, satisfaction, organizational citizenship, intent to quit, and other psychologically and organizationally relevant variables.
Examples:
- Cigarette breaks over time (frequency, # of incumbents per day)
- Garbage (weight of trash before/after a recycling program)
- Social media usage (tweets, blog posts, Facebook)
- Wear of floor tiles
- Absenteeism or tardiness records
- Incumbent, team, and department production quality and quantity measures
Unobtrusive Behavioral Observation
Most unobtrusive observations must be conducted over time: establish a baseline for the behavior, then examine subsequent time periods for changes/trends.
Generally, this is much more labor-intensive data collection than surveys, and results should be cross-validated with other types of evidence.
Scale Reduction and One-item Measures
Standard scale construction calls for "sampling the construct domain": items that tap into different aspects of the construct and refer to various content areas. Scales with more items can include a larger sample of the behaviors or topics relevant to the construct.
[Diagram: overlap of Construct Domain and Item Content. The overlap is RELEVANT (measuring what you want to measure); item content outside the construct domain is CONTAMINATED (measuring what you don't want to measure); construct domain not covered by the items is DEFICIENT (not measuring what you want to measure).]
Scale Reduction and One-item Measures
When fewer items are used, by necessity they must be either more general in wording to obtain (hopefully) full coverage, or more narrow to focus on a subset of behaviors/topics.
Internal consistency reliability reinforces this trade-off: as the number of items gets smaller, inter-item correlation must rise to maintain a given level of internal consistency. However, scales with fewer than 3-5 items rarely achieve acceptable internal consistency without simply becoming alternative wordings of the same question.
Discussion: How many of you have taken a measure where you were asked the same question again and again? Your reactions? Why was this done?
The one-item solution: a one-item measure usually "covers" a construct only if it is highly non-specific. A one-item measure has a measurable reliability (see Wanous & Hudy, ORM, 2001), but the concept of internal consistency is meaningless for it.
Discuss: a one-item knowledge measure vs. a one-item job satisfaction measure.
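The trade-off between item count and inter-item correlation follows directly from the Spearman-Brown logic behind coefficient alpha. This sketch inverts the standardized-alpha formula to show the mean inter-item correlation a k-item scale needs to hold alpha at a target level:

```python
def required_interitem_r(alpha, k):
    """Mean inter-item correlation needed for a k-item scale to reach
    a target standardized coefficient alpha. Derived by solving
    alpha = k*r / (1 + (k-1)*r) for r."""
    return alpha / (k - alpha * (k - 1))

# To hold alpha = .80, fewer items demand much stronger inter-item r:
for k in (10, 5, 3, 2):
    print(k, round(required_interitem_r(0.80, k), 2))
```

A 10-item scale reaches alpha = .80 with a mean inter-item r of about .29, but a 2-item scale needs about .67, which in practice forces near-paraphrase items.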
One-item Measure Literature
- Nagy (2002): single-item measures of each of the five JDI job satisfaction facets correlated between .60 and .72 with the full-length versions of the JDI scales
- Patrician (2004): review of single-item graphical representation scales, the so-called "faces" scales
- Shamir & Kark (2004): single-item graphic scale for organizational identification
- Oshagbemi (1999): single-item job satisfaction scales systematically overestimate workers' job satisfaction
- Loo (2002): single-item measures work best on "homogeneous" constructs
Scale Reduction: Technical Considerations
Items can be struck from a scale based on three different sets of qualities:
1. Internal item qualities: properties of items that can be assessed in reference to other items on the scale or the scale's summated scores.
2. External item qualities: connections between the scale (or its individual items) and other constructs or indicators.
3. Judgmental item qualities: issues that require subjective judgment and/or are difficult to assess in isolation from the context in which the scale is administered.
The literature suggests that the most widely used method of item selection in scale reduction is some form of internal consistency maximization. Corrected item-total correlations provide diagnostic information about internal consistency, and in scale reduction efforts they have been employed as a basis for retaining items in a shortened scale version. Factor analysis is another technique that, when used for scale reduction, can increase internal consistency, assuming one chooses items that load strongly on a dominant factor.
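The corrected item-total correlation mentioned above can be computed directly: correlate each item with the sum of the remaining items, so the item does not inflate its own total. This sketch simulates four trait-driven items plus one pure-noise item (the simulated data are illustrative) to show the diagnostic at work:

```python
import numpy as np

def corrected_item_total(X):
    """Corrected item-total correlation for each column of X
    (respondents x items): each item vs. the sum of the other items."""
    X = np.asarray(X, dtype=float)
    total = X.sum(axis=1)
    return np.array([
        np.corrcoef(X[:, j], total - X[:, j])[0, 1]
        for j in range(X.shape[1])
    ])

# Simulated illustration: four items driven by one latent trait,
# plus one pure-noise item that should show a near-zero correlation.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
good = latent + rng.normal(size=(200, 4))
noise = rng.normal(size=(200, 1))
r = corrected_item_total(np.hstack([good, noise]))
```

In an internal-consistency-driven reduction, the noise item would be the first struck; note the earlier caveat that this criterion alone rewards redundancy.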
Scale Reduction II
Despite their prevalence, scale reduction techniques that maximize internal consistency have important limitations:
- Choosing items to maximize internal consistency leads to item sets that are highly redundant in appearance, narrow in content, and potentially low in validity.
- High internal consistency often signifies a failure to adequately sample content from all parts of the construct domain.
- To obtain high values of coefficient alpha, a scale developer need only write a set of items that paraphrase each other or are antonyms of one another. One can expect an equivalent result (i.e., high redundancy) from the analogous approach in scale reduction: excluding all items but those highly similar in content.
Scale Reduction III
IRT provides an alternative strategy for scale reduction that does not focus on maximizing internal consistency. One should retain items that are highly discriminating (i.e., moderate to large values of a) and attempt to include items with a range of item thresholds (i.e., b) that adequately covers the expected range of the trait in the measured individuals. IRT analysis for scale reduction can be complex and does not provide a definitive answer to the question of which items to retain; rather, it provides evidence about which items might work well together to cover the trait range.
Relating items to external criteria provides a viable alternative to internal consistency and other internal qualities. However, because correlations vary across samples, instruments, and administration contexts, an item that predicts an external criterion best in one sample may not do so in another. Choosing items to maximize a relation with an external criterion also risks a decrease in discriminant validity between the measures of the two constructs.
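Assuming 2PL parameters (discrimination a, threshold b) have already been estimated with IRT software, the retention heuristic above might be sketched as follows. All item names, parameter values, and cutoffs here are illustrative, not prescriptive:

```python
# Hypothetical fitted 2PL parameters: item -> (a, b).
ITEM_PARAMS = {
    "item1": (1.8, -2.0),
    "item2": (0.4, -0.5),   # weak discrimination -> candidate to drop
    "item3": (1.5,  0.0),
    "item4": (1.2,  0.1),   # threshold nearly duplicates item3
    "item5": (2.0,  1.5),
}

def select_items(params, min_a=0.8, min_b_gap=0.5):
    """Keep discriminating items (a >= min_a) whose thresholds are
    spaced at least min_b_gap apart, so the retained set covers the
    trait range without piling up at one difficulty level."""
    keep = []
    discriminating = sorted(
        ((name, a, b) for name, (a, b) in params.items() if a >= min_a),
        key=lambda t: t[2])
    for name, a, b in discriminating:
        if not keep or b - keep[-1][2] >= min_b_gap:
            keep.append((name, a, b))
    return [name for name, _, _ in keep]
```

As the slide notes, this is evidence rather than a definitive answer: the cutoffs are judgment calls, and near-tied items (like item3 vs. item4 here) still require human review.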
Scale Reduction IV
The overarching goal of any scale reduction project should be to closely replicate the pattern of relations established within the construct's nomological network. In evaluating any given item's relations with external criteria, one should seek moderate correlations with a variety of related scales (i.e., convergent validity) and low correlations with a variety of unrelated measures.
Researchers may also need to examine criteria beyond statistical relations to determine which items should remain in an abbreviated scale: clarity of expression, relevance to a particular respondent population, semantic redundancy of an item's content with other items, the perceived invasiveness of an item, and an item's "face" validity. Items lacking apparent relevance, or that are highly redundant with other items on the scale, may be viewed negatively by respondents. To the extent that judgmental qualities can be used to select items with face validity, both the reactions of constituencies and the motivation of respondents may be enhanced.
A simple strategy for retention that does not require IRT analysis is stepwise regression: rank-ordered item inclusion in an "optimal" reduced-length scale that accounts for a nearly maximal proportion of variance in its own full-length summated scale score. The order of entry into the stepwise regression is a rank-order proxy indicating item goodness. Empirical results show that this method performs as well as a brute-force combinatorial scan of item combinations; the method can also be combined with human judgment to pick items from among the top-ranked items (but not in strict ranking order).
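The stepwise strategy above can be sketched with ordinary least squares: regress the full-length scale total on the items, greedily adding whichever item most improves R-squared, and record the order of entry as the item ranking. This is a minimal illustration, not the published procedure's exact implementation:

```python
import numpy as np

def stepwise_item_ranking(X):
    """Forward stepwise regression of the full-scale total on its own
    items; returns column indices in order of entry (rank proxy for
    item goodness)."""
    X = np.asarray(X, dtype=float)
    y = X.sum(axis=1)            # full-length summated scale score

    def r2(cols):
        # R-squared of an intercept + selected-items regression.
        A = np.column_stack([np.ones(len(y)), X[:, cols]])
        resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
        return 1 - resid.var() / y.var()

    remaining, order = list(range(X.shape[1])), []
    while remaining:
        best = max(remaining, key=lambda j: r2(order + [j]))
        order.append(best)
        remaining.remove(best)
    return order
```

Items that carry more of the total-score variance enter earlier, so truncating the list at the desired scale length yields the reduced item set.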

Contenu connexe

Tendances

Case study method
Case study methodCase study method
Case study method
Balogun53
 
How To Conduct Survey 209
How To Conduct Survey 209How To Conduct Survey 209
How To Conduct Survey 209
swati18
 
Tools campus workshop 17.3.11 bapp wbs3835 qual r
Tools campus workshop 17.3.11 bapp wbs3835 qual rTools campus workshop 17.3.11 bapp wbs3835 qual r
Tools campus workshop 17.3.11 bapp wbs3835 qual r
Paula Nottingham
 
Probsolv2007 engineering design processes pp ws
Probsolv2007   engineering design processes pp wsProbsolv2007   engineering design processes pp ws
Probsolv2007 engineering design processes pp ws
videoteacher
 

Tendances (19)

Questionnaire design & basic of survey
Questionnaire design & basic of surveyQuestionnaire design & basic of survey
Questionnaire design & basic of survey
 
Qualitative research technique
Qualitative research techniqueQualitative research technique
Qualitative research technique
 
Surveys
SurveysSurveys
Surveys
 
Case study method
Case study methodCase study method
Case study method
 
How To Conduct Survey 209
How To Conduct Survey 209How To Conduct Survey 209
How To Conduct Survey 209
 
Designing of Questionnaire
Designing of QuestionnaireDesigning of Questionnaire
Designing of Questionnaire
 
Requirement elicitation technique “one on one interview“
Requirement elicitation technique “one on one interview“Requirement elicitation technique “one on one interview“
Requirement elicitation technique “one on one interview“
 
Fact finding
Fact findingFact finding
Fact finding
 
Fact finding techniques
Fact finding techniquesFact finding techniques
Fact finding techniques
 
Step Up Your Survey Research - Dawn of the Data Age Lecture Series
Step Up Your Survey Research - Dawn of the Data Age Lecture SeriesStep Up Your Survey Research - Dawn of the Data Age Lecture Series
Step Up Your Survey Research - Dawn of the Data Age Lecture Series
 
Chp12 - Research Methods for Business By Authors Uma Sekaran and Roger Bougie
Chp12  - Research Methods for Business By Authors Uma Sekaran and Roger BougieChp12  - Research Methods for Business By Authors Uma Sekaran and Roger Bougie
Chp12 - Research Methods for Business By Authors Uma Sekaran and Roger Bougie
 
Evidence-based decision-making in organizations: Why we need it and why some...
Evidence-based decision-making in organizations:  Why we need it and why some...Evidence-based decision-making in organizations:  Why we need it and why some...
Evidence-based decision-making in organizations: Why we need it and why some...
 
Primary Research Basics: Inforum
Primary Research Basics: InforumPrimary Research Basics: Inforum
Primary Research Basics: Inforum
 
Tools campus workshop 17.3.11 bapp wbs3835 qual r
Tools campus workshop 17.3.11 bapp wbs3835 qual rTools campus workshop 17.3.11 bapp wbs3835 qual r
Tools campus workshop 17.3.11 bapp wbs3835 qual r
 
Primary research methods presentation.
Primary research methods presentation.Primary research methods presentation.
Primary research methods presentation.
 
Survey Methodology and Questionnaire Design Theory Part II
Survey Methodology and Questionnaire Design Theory Part IISurvey Methodology and Questionnaire Design Theory Part II
Survey Methodology and Questionnaire Design Theory Part II
 
Analytic emperical Mehods
Analytic emperical MehodsAnalytic emperical Mehods
Analytic emperical Mehods
 
EBMgt Course Module 3: Why Do We Need It?
EBMgt Course Module 3: Why Do We Need It?EBMgt Course Module 3: Why Do We Need It?
EBMgt Course Module 3: Why Do We Need It?
 
Probsolv2007 engineering design processes pp ws
Probsolv2007   engineering design processes pp wsProbsolv2007   engineering design processes pp ws
Probsolv2007 engineering design processes pp ws
 

En vedette (9)

Carma internet research module n-bias
Carma internet research module   n-biasCarma internet research module   n-bias
Carma internet research module n-bias
 
Carma internet research module: Encouraging responding
Carma internet research module: Encouraging respondingCarma internet research module: Encouraging responding
Carma internet research module: Encouraging responding
 
Carma internet research module: Future data collection
Carma internet research module: Future data collectionCarma internet research module: Future data collection
Carma internet research module: Future data collection
 
The Promise And Folly Of A Unitary Doctoral
The Promise And Folly Of A Unitary DoctoralThe Promise And Folly Of A Unitary Doctoral
The Promise And Folly Of A Unitary Doctoral
 
Moving Data to and From R
Moving Data to and From RMoving Data to and From R
Moving Data to and From R
 
Carma internet research module survey design issues
Carma internet research module   survey design issuesCarma internet research module   survey design issues
Carma internet research module survey design issues
 
Carma internet research module getting started with question pro
Carma internet research module   getting started with question proCarma internet research module   getting started with question pro
Carma internet research module getting started with question pro
 
Basic Graphics with R
Basic Graphics with RBasic Graphics with R
Basic Graphics with R
 
Discovery informaticsstanton
Discovery informaticsstantonDiscovery informaticsstanton
Discovery informaticsstanton
 

Similaire à Carma internet research module: Survey reduction

Edu 702 group presentation (questionnaire) 2
Edu 702   group presentation (questionnaire) 2Edu 702   group presentation (questionnaire) 2
Edu 702 group presentation (questionnaire) 2
Dhiya Lara
 
Chapter 10 pandemic planPandemic planFalls under .docx
Chapter 10 pandemic planPandemic planFalls under .docxChapter 10 pandemic planPandemic planFalls under .docx
Chapter 10 pandemic planPandemic planFalls under .docx
bartholomeocoombs
 
Comu346 lecture 7 - user evaluation
Comu346   lecture 7 - user evaluationComu346   lecture 7 - user evaluation
Comu346 lecture 7 - user evaluation
David Farrell
 
Edu 702 group presentation (questionnaire)
Edu 702   group presentation (questionnaire)Edu 702   group presentation (questionnaire)
Edu 702 group presentation (questionnaire)
Azura Zaki
 
Reading 1 need assessment
Reading 1 need assessmentReading 1 need assessment
Reading 1 need assessment
Alex Tsang
 
SURVEY USAGE AND FINDING CORRELATIONS Survey and Correlat.docx
SURVEY USAGE AND  FINDING CORRELATIONS Survey and Correlat.docxSURVEY USAGE AND  FINDING CORRELATIONS Survey and Correlat.docx
SURVEY USAGE AND FINDING CORRELATIONS Survey and Correlat.docx
mabelf3
 
Research and advocacy by Seetal Daas
Research and advocacy by Seetal DaasResearch and advocacy by Seetal Daas
Research and advocacy by Seetal Daas
Seetal Daas
 
Designing a survey questionnaire
Designing a survey questionnaireDesigning a survey questionnaire
Designing a survey questionnaire
Argie Ray Butalid
 

Similaire à Carma internet research module: Survey reduction (20)

How to design questionnaire
How to design questionnaireHow to design questionnaire
How to design questionnaire
 
Questionnaires 6 steps for research method.
Questionnaires 6 steps for research method.Questionnaires 6 steps for research method.
Questionnaires 6 steps for research method.
 
Edu 702 group presentation (questionnaire) 2
Edu 702   group presentation (questionnaire) 2Edu 702   group presentation (questionnaire) 2
Edu 702 group presentation (questionnaire) 2
 
Chapter 10 pandemic planPandemic planFalls under .docx
Chapter 10 pandemic planPandemic planFalls under .docxChapter 10 pandemic planPandemic planFalls under .docx
Chapter 10 pandemic planPandemic planFalls under .docx
 
Comu346 lecture 7 - user evaluation
Comu346   lecture 7 - user evaluationComu346   lecture 7 - user evaluation
Comu346 lecture 7 - user evaluation
 
Lesson 5a_Surveys and Measurement 2023.pptx
Lesson 5a_Surveys and Measurement 2023.pptxLesson 5a_Surveys and Measurement 2023.pptx
Lesson 5a_Surveys and Measurement 2023.pptx
 
Survey design workshop
Survey design workshopSurvey design workshop
Survey design workshop
 
Edu 702 group presentation (questionnaire)
Edu 702   group presentation (questionnaire)Edu 702   group presentation (questionnaire)
Edu 702 group presentation (questionnaire)
 
Edu 702 group presentation (questionnaire)
Edu 702   group presentation (questionnaire)Edu 702   group presentation (questionnaire)
Edu 702 group presentation (questionnaire)
 
Reading 1 need assessment
Reading 1 need assessmentReading 1 need assessment
Reading 1 need assessment
 
Revisited module 2 wbs3630 2015
Revisited module 2 wbs3630 2015Revisited module 2 wbs3630 2015
Revisited module 2 wbs3630 2015
 
SURVEY USAGE AND FINDING CORRELATIONS Survey and Correlat.docx
SURVEY USAGE AND  FINDING CORRELATIONS Survey and Correlat.docxSURVEY USAGE AND  FINDING CORRELATIONS Survey and Correlat.docx
SURVEY USAGE AND FINDING CORRELATIONS Survey and Correlat.docx
 
HEALTHCARE RESEARCH METHODS: Primary Studies: Developing a Questionnaire - Su...
HEALTHCARE RESEARCH METHODS: Primary Studies: Developing a Questionnaire - Su...HEALTHCARE RESEARCH METHODS: Primary Studies: Developing a Questionnaire - Su...
HEALTHCARE RESEARCH METHODS: Primary Studies: Developing a Questionnaire - Su...
 
7027203.ppt
7027203.ppt7027203.ppt
7027203.ppt
 
Google Tool: Creating Surveys
Google Tool:  Creating SurveysGoogle Tool:  Creating Surveys
Google Tool: Creating Surveys
 
Usability Primer - for Alberta Municipal Webmasters Working Group
Usability Primer - for Alberta Municipal Webmasters Working GroupUsability Primer - for Alberta Municipal Webmasters Working Group
Usability Primer - for Alberta Municipal Webmasters Working Group
 
Experience Research Best Practices - UX Meet Up Boston 2013 - Dan Berlin
Experience Research Best Practices - UX Meet Up Boston 2013 - Dan BerlinExperience Research Best Practices - UX Meet Up Boston 2013 - Dan Berlin
Experience Research Best Practices - UX Meet Up Boston 2013 - Dan Berlin
 
Experience Research Best Practices
Experience Research Best PracticesExperience Research Best Practices
Experience Research Best Practices
 
Research and advocacy by Seetal Daas
Research and advocacy by Seetal DaasResearch and advocacy by Seetal Daas
Research and advocacy by Seetal Daas
 
Designing a survey questionnaire
Designing a survey questionnaireDesigning a survey questionnaire
Designing a survey questionnaire
 

Plus de Syracuse University

Carma internet research module scale development
Carma internet research module   scale developmentCarma internet research module   scale development
Carma internet research module scale development
Syracuse University
 
Mining tweets for security information (rev 2)
Mining tweets for security information (rev 2)Mining tweets for security information (rev 2)
Mining tweets for security information (rev 2)
Syracuse University
 
Carma internet research module detecting bad data
Carma internet research module   detecting bad dataCarma internet research module   detecting bad data
Carma internet research module detecting bad data
Syracuse University
 

Plus de Syracuse University (20)

Basic SEVIS Overview for U.S. University Faculty
Basic SEVIS Overview for U.S. University FacultyBasic SEVIS Overview for U.S. University Faculty
Basic SEVIS Overview for U.S. University Faculty
 
Why R? A Brief Introduction to the Open Source Statistics Platform
Why R? A Brief Introduction to the Open Source Statistics PlatformWhy R? A Brief Introduction to the Open Source Statistics Platform
Why R? A Brief Introduction to the Open Source Statistics Platform
 
Chapter9 r studio2
Chapter9 r studio2Chapter9 r studio2
Chapter9 r studio2
 
Basic Overview of Data Mining
Basic Overview of Data MiningBasic Overview of Data Mining
Basic Overview of Data Mining
 
Strategic planning
Strategic planningStrategic planning
Strategic planning
 
Carma internet research module scale development
Carma internet research module   scale developmentCarma internet research module   scale development
Carma internet research module scale development
 
Carma internet research module visual design issues
Carma internet research module   visual design issuesCarma internet research module   visual design issues
Carma internet research module visual design issues
 
Siop impact of social media
Siop impact of social mediaSiop impact of social media
Siop impact of social media
 
R-Studio Vs. Rcmdr
R-Studio Vs. RcmdrR-Studio Vs. Rcmdr
R-Studio Vs. Rcmdr
 
Getting Started with R
Getting Started with RGetting Started with R
Getting Started with R
 
Introduction to Advance Analytics Course
Introduction to Advance Analytics CourseIntroduction to Advance Analytics Course
Introduction to Advance Analytics Course
 
Installing R and R-Studio
Installing R and R-StudioInstalling R and R-Studio
Installing R and R-Studio
 
Mining tweets for security information (rev 2)
Mining tweets for security information (rev 2)Mining tweets for security information (rev 2)
Mining tweets for security information (rev 2)
 
What is Data Science
What is Data ScienceWhat is Data Science
What is Data Science
 
Reducing Response Burden
Reducing Response BurdenReducing Response Burden
Reducing Response Burden
 
PACIS Survey Workshop
PACIS Survey WorkshopPACIS Survey Workshop
PACIS Survey Workshop
 
Carma internet research module: Sampling for internet
Carma internet research module: Sampling for internetCarma internet research module: Sampling for internet
Carma internet research module: Sampling for internet
 
Carma internet research module: Research design catalog
Carma internet research module: Research design catalogCarma internet research module: Research design catalog
Carma internet research module: Research design catalog
 
Stanton eScience Presentation
Stanton eScience PresentationStanton eScience Presentation
Stanton eScience Presentation
 
Carma internet research module detecting bad data
Carma internet research module   detecting bad dataCarma internet research module   detecting bad data
Carma internet research module detecting bad data
 

Dernier

Seal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptxSeal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptx
negromaestrong
 
The basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptxThe basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptx
heathfieldcps1
 

Dernier (20)

PROCESS RECORDING FORMAT.docx
PROCESS      RECORDING        FORMAT.docxPROCESS      RECORDING        FORMAT.docx
PROCESS RECORDING FORMAT.docx
 
Python Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docxPython Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docx
 
ComPTIA Overview | Comptia Security+ Book SY0-701
ComPTIA Overview | Comptia Security+ Book SY0-701ComPTIA Overview | Comptia Security+ Book SY0-701
ComPTIA Overview | Comptia Security+ Book SY0-701
 
Seal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptxSeal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptx
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SD
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot Graph
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The Basics
 
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptxBasic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17

Carma internet research module: Survey reduction

  • 1. Survey Reduction Techniques CARMA Internet Research Module Jeffrey Stanton
  • 2. Primary Goal: Reduce Administration Time
    Secondary goals:
    - Reduce perceived administration time
    - Increase the engagement of the respondent with the experience of completing the instrument: lock in interest and excitement from the start
    - Reduce the extent of missing and erroneous data due to carelessness, rushing, test forms that are hard to use, etc.
    - Increase the respondents' ease of experience (maybe even enjoyment!) so that they will persist to the end AND respond again next year (or whenever the next survey comes out)
    Conclusions?
    - Make the survey SEEM as short and compact as possible
    - Streamline the WHOLE EXPERIENCE, from the first call for participation all the way to the end of the final page of the instrument
    - Focus test-reduction efforts on the easy stuff before diving into the nitty-gritty statistical stuff 2
  • 3. 3 Please choose the option that most closely fits how you describe yourself. Please select only one of the two options: Female [ ] Male [ ]
  • 4. Instruction Reduction
    - Fewer than 4% of respondents make use of printed instructions: Novick and Ward (2006, ACM-SIGDOC)
    - Comprehension of instructions only influences novice performance on surveys: Catrambone (1990; HCI)
    - Instructions on average are written five grade levels above the average grade level of respondents; 23% of respondents failed to understand at least one element of instructions: Spandorfer et al. (1995; Annals of EM)
    - Unless you are working with a special/unusual population, you can assume that respondents know how to complete Likert scales and other common response formats without instructions
    - Most people don't read instructions anyway. When they do, the instructions often don't help them respond any better!
    - If your response format is so novel that people require instructions, then you have a substantial burden to pilot test, in order to ensure that people comprehend the instructions and respond appropriately. Otherwise, do not take the risk! 4
  • 5.
  • 6. Self-completed demographic data frequently contains missing fields or intentional mistakes 5
  • 7.
  • 8. Respondents should feel that demographics are not serving to identify them in their survey responses.
  • 9. You could offer respondents two choices: (a) match (or automatically fill in) some or all demographic data using the code number provided in your invitation email (or in a paper letter), or (b) have them fill in the demographic data themselves (on web-based surveys, a reveal can branch respondents to the demographics page). 6
  • 10. Eligibility, Skip Logic, and Branching
    - Eligibility: If a survey has eligibility requirements, the screening questions should be placed at the earliest possible point in the survey. (Eligibility requirements can appear in instructions, but this should not be the sole method of screening out ineligible respondents.)
    - Skip Logic: Skip logic actually shortens the survey by setting aside questions for which the respondent is ineligible.
    - Branching: Branching may not shorten the survey, but it can improve the user experience by offering questions specifically focused on the respondent's demographic or reported experience. 7
    Illustration credit: Vovici.com
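The screening, skip-logic, and branching ideas above can be sketched as a small routing function. This is a minimal illustration, not any survey platform's API; the question identifiers and block names are hypothetical:

```python
# Minimal sketch of eligibility screening plus skip logic: ineligible
# respondents exit immediately, and skip logic removes question blocks
# the respondent does not qualify for. All keys/blocks are hypothetical.

def build_question_path(answers):
    """Return the list of question blocks this respondent should see."""
    if not answers.get("is_employed"):      # eligibility screen, asked first
        return []                           # exit early; no wasted respondent effort
    path = ["core_attitudes"]
    if answers.get("supervises_others"):    # skip logic: only supervisors
        path.append("supervisor_items")     # see the supervision block
    path.append("demographics")             # demographics last
    return path

print(build_question_path({"is_employed": True, "supervises_others": False}))
# → ['core_attitudes', 'demographics']
```

The early exit is what makes front-loaded screening pay off: an ineligible respondent never sees the rest of the instrument.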
  • 11. Implications: Eligibility, Skip Logic, and Branching
    Ever answer a survey where you knew that your answer would predict how many questions you would have to answer after that? E.g., "How many hotel chains have you been to in the last year?"
    If users can predict that their eligibility, the survey skip logic, or survey branching will lead to longer, more complex, or more difficult or tedious responses, they may:
    - Abandon the survey
    - Back up and change their answer to the conditional question that entails less work (if the interface permits it)
    Branch design should try not to imply what the user would have experienced in another branch. Paths through the survey should avoid causing considerably more work for some respondents than for others. 8
  • 12. Panel Designs and/or Multiple Administration
    - Panel designs measure the same respondents on multiple occasions. Typically either predictors are gathered at an early point in time and outcomes at a later point in time, or both predictors and outcomes are measured at every time point. (There are variations on these two themes.)
    - Panel designs are based on maturation and/or intervention processes that require the passage of time. Examples: career aspirations over time, person-organization fit over time, training before/after.
    - Minimally, panel designs can help mitigate (though not solve) the problem of common method bias; e.g., responding to a criterion at time 2, respondents tend to forget how they responded at time 1. 9
  • 13. Panel Designs and/or Multiple Administration
    Survey designers can apply the logic of panel designs to their own surveys: sometimes you have to collect a large number of variables (no measure shortening), and it is impractical to do so in a single administration.
    Generally speaking: better to have many short, pleasant survey administrations with a cumulative "work time lost" of an hour than one long, grinding hour-long survey. The former can get you happier and less fatigued respondents and better data, hopefully.
    In the limit, consider the implications of a "Today's Poll" approach to measuring climate, stress, satisfaction, or other attitudinal variables: one question per day, every day.... 10
  • 14. Unobtrusive Behavioral Observation
    Surveys appear convenient and relatively inexpensive in and of themselves... however, the cumulative work time lost across all respondents may be quite large.
    Methods that assess social variables through observations of overt behavior rather than self-report can provide indications of stress, satisfaction, organizational citizenship, intent to quit, and other psychologically and organizationally relevant variables.
    Examples:
    - Cigarette breaks over time (frequency, # of incumbents per day)
    - Garbage (weight of trash before/after a recycling program)
    - Social media usage (tweets, blog posts, Facebook)
    - Wear of floor tiles
    - Absenteeism or tardiness records
    - Incumbent, team, and department production quality and quantity measures 11
  • 15. Unobtrusive Behavioral Observation
    - Most unobtrusive observations must be conducted over time: establish a baseline for the behavior, then examine subsequent time periods for changes/trends over time.
    - Generally, much more labor-intensive data collection than surveys.
    - Results should be cross-validated with other types of evidence. 12
  • 16. Scale Reduction and One-item Measures
    Standard scale construction calls for "sampling the construct domain" with items that tap into different aspects of the construct and refer to various content areas. Scales with more items can include a larger sample of the behaviors or topics relevant to the construct. 13
    [Diagram: overlapping circles for Construct Domain and Item Content. Their overlap is RELEVANT (measuring what you want to measure); Item Content outside the overlap is CONTAMINATED (measuring what you don't want to measure); Construct Domain outside the overlap is DEFICIENT (not measuring what you want to measure).]
  • 17. Scale Reduction and One-item Measures
    When fewer items are used, by necessity they must be either more general in wording to obtain full coverage (hopefully), or more narrow to focus on a subset of behaviors/topics.
    Internal consistency reliability reinforces this trade-off: as the number of items gets smaller, the inter-item correlation must rise to maintain a given level of internal consistency. However, scales with fewer than 3-5 items rarely achieve acceptable internal consistency without simply becoming alternative wordings of the same questions.
    Discussion: How many of you have taken a measure where you were being asked the same question again and again? Your reactions? Why was this done?
    The one-item solution: A one-item measure usually "covers" a construct only if it is highly non-specific. A one-item measure has a measurable reliability (see Wanous & Hudy; ORM, 2001), but the concept of internal consistency is meaningless.
    Discuss: A one-item knowledge measure vs. a one-item job satisfaction measure. 14
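The items-versus-correlation trade-off above can be made concrete with the standardized form of coefficient alpha, alpha = k*rbar / (1 + (k-1)*rbar), where k is the number of items and rbar the average inter-item correlation. This sketch (assuming the standardized formula) solves for the rbar required to hold alpha at .80 as items are removed:

```python
# Sketch of the internal-consistency trade-off via standardized
# Cronbach's alpha: alpha = k*rbar / (1 + (k-1)*rbar).
# Solving for rbar shows how steeply the average inter-item correlation
# must rise as the item count k shrinks, for a fixed alpha target.

def required_interitem_r(alpha, k):
    """Average inter-item correlation needed for k items to reach alpha."""
    return alpha / (k - alpha * (k - 1))

for k in (10, 5, 3, 2):
    print(k, round(required_interitem_r(0.80, k), 3))
# 10 → 0.286, 5 → 0.444, 3 → 0.571, 2 → 0.667
```

A 2-item scale needs items correlating about .67 on average to keep alpha at .80, which in practice usually means near-paraphrases of one another.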
  • 18. One-item Measure Literature
    - Research using single-item measures of each of the five JDI job satisfaction facets found correlations between .60 and .72 with the full-length versions of the JDI scales: Nagy (2002)
    - Review of single-item graphical representation scales, the so-called "faces" scales: Patrician (2004)
    - Single-item graphic scale for organizational identification: Shamir & Kark (2004)
    - Research finding that single-item job satisfaction scales systematically overestimate workers' job satisfaction: Oshagbemi (1999)
    - Single-item measures work best on "homogeneous" constructs: Loo (2002) 15
  • 19. Scale Reduction: Technical Considerations
    Items can be struck from a scale based on three different sets of qualities:
    1. Internal item qualities refer to properties of items that can be assessed in reference to other items on the scale or the scale's summated scores.
    2. External item qualities refer to connections between the scale (or its individual items) and other constructs or indicators.
    3. Judgmental item qualities refer to those issues that require subjective judgment and/or are difficult to assess in isolation from the context in which the scale is administered.
    Literature review suggests that the most widely used method for item selection in scale reduction is some form of internal consistency maximization:
    - Corrected item-total correlations provide diagnostic information about internal consistency. In scale reduction efforts, item-total correlations have been employed as a basis for retaining items for a shortened scale version.
    - Factor analysis is another technique that, when used for scale reduction, can lead to increased internal consistency, assuming one chooses items that load strongly on a dominant factor. 16
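The corrected item-total correlation mentioned above is simply each item correlated with the sum of the remaining items. A minimal sketch with hypothetical Likert responses (in practice one would use a statistics package):

```python
# Sketch (toy data) of corrected item-total correlations: each item is
# correlated with the sum of the *other* items, so the item does not
# inflate its own total. Low or negative values flag drop candidates.

import statistics

def pearson(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def corrected_item_total(items):
    """items: list of per-item response lists (respondents in the same order)."""
    out = []
    for i, item in enumerate(items):
        rest = [sum(v) for v in zip(*(it for j, it in enumerate(items) if j != i))]
        out.append(pearson(item, rest))
    return out

# Hypothetical 5-point responses from six respondents on four items:
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 1, 5],
    [2, 3, 4, 2, 5, 1],  # stray item running against the others: drop candidate
]
for r in corrected_item_total(items):
    print(round(r, 2))
```

The first three items track one another and show strong item-total correlations; the fourth correlates negatively with the rest, exactly the diagnostic pattern used to cut items in internal-consistency-driven reduction.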
  • 20. Scale Reduction II
    Despite their prevalence, there are important limitations to scale reduction techniques that maximize internal consistency:
    - Choosing items to maximize internal consistency leads to item sets highly redundant in appearance, narrow in content, and potentially low in validity.
    - High internal consistency often signifies a failure to adequately sample content from all parts of the construct domain.
    - To obtain high values of coefficient alpha, a scale developer need only write a set of items that paraphrase each other or are antonyms of one another. One can expect an equivalent result (i.e., high redundancy) from using the analogous approach in scale reduction, that is, excluding all items but those highly similar in content. 17
  • 21. Scale Reduction III
    IRT provides an alternative strategy for scale reduction that does not focus on maximizing internal consistency:
    - One should retain items that are highly discriminating (i.e., moderate to large values of a), and one should attempt to include items with a range of item thresholds (i.e., b) that adequately cover the expected range of the trait in measured individuals.
    - IRT analysis for scale reduction can be complex and does not provide a definitive answer to the question of which items to retain; rather, it provides evidence for which items might work well together to cover the trait range.
    Relating items to external criteria provides a viable alternative to internal consistency and other internal qualities:
    - Because correlations vary across different samples, instruments, and administration contexts, an item that predicts an external criterion best in one sample may not do so in another.
    - Choosing items to maximize a relation with an external criterion runs the risk of a decrease in discriminant validity between the measures of the two constructs. 18
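The IRT retention heuristic above can be illustrated with the two-parameter logistic (2PL) model, where an item's Fisher information is I(theta) = a^2 * p * (1 - p). Highly discriminating items (large a) contribute sharp peaks of information near their threshold b, which is why spreading b values covers the trait range. The item parameters below are hypothetical:

```python
# Sketch of 2PL item information, the quantity behind "keep large-a items
# and spread the b thresholds." Parameters (a, b) are hypothetical.

import math

def p_2pl(theta, a, b):
    """Probability of endorsing a 2PL item at trait level theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at trait level theta."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

items = [(2.0, -1.0), (0.4, 0.0), (2.0, 1.0)]  # (a, b) pairs
for theta in (-1.0, 0.0, 1.0):
    total = sum(item_information(theta, a, b) for a, b in items)
    print(theta, round(total, 3))
```

The two discriminating items each peak at their own threshold (information a^2/4 = 1.0 at theta = b), while the weak a = 0.4 item contributes almost nothing anywhere, making it the natural cut in an IRT-guided reduction.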
  • 22. Scale Reduction IV
    The overarching goal of any scale reduction project should be to closely replicate the pattern of relations established within the construct's nomological network:
    - In evaluating any given item's relations with external criteria, one should seek moderate correlations with a variety of related scales (i.e., convergent validity) and low correlations with a variety of unrelated measures.
    Researchers may also need to examine other criteria beyond statistical relations to determine which items should remain in an abbreviated scale:
    - Clarity of expression, relevance to a particular respondent population, semantic redundancy of an item's content with other items, the perceived invasiveness of an item, and an item's "face" validity.
    - Items lacking apparent relevance, or that are highly redundant with other items on the scale, may be viewed negatively by respondents. To the extent that judgmental qualities can be used to select items with face validity, both the reactions of constituencies and the motivation of respondents may be enhanced.
    Simple strategy for retention that does not require IRT analysis: stepwise regression.
    - Rank-ordered item inclusion in an "optimal" reduced-length scale that accounts for a nearly maximal proportion of variance in its own full-length summated scale score.
    - Order of entry into the stepwise regression is a rank-order proxy indicating item goodness.
    - Empirical results show that this method performs as well as a brute-force combinatorial scan of item combinations; the method can also be combined with human judgment to pick items from among the top-ranked items (but not in strict ranking order). 19
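The stepwise idea above can be approximated with a greedy forward-selection sketch on toy data. This uses unit-weighted sums correlated against the full-length total rather than fitted regression coefficients, so it is only a simplified analogue of true stepwise regression, but the order of entry plays the same rank-order role:

```python
# Simplified greedy analogue (toy data) of stepwise item selection:
# repeatedly add the item whose inclusion makes the subset's summed score
# track the full-length summated scale score most closely. Order of entry
# is the rank-order proxy for item goodness.

import statistics

def pearson(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0   # guard: zero-variance vectors score 0

def greedy_item_ranking(items):
    """Return item indices in order of entry into the reduced scale."""
    full = [sum(v) for v in zip(*items)]       # full-length summated score
    chosen, remaining = [], list(range(len(items)))
    while remaining:
        def score(i):
            subset = [sum(v) for v in zip(*(items[j] for j in chosen + [i]))]
            return pearson(subset, full)
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

items = [  # hypothetical responses, six respondents x four items
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 1, 5],
    [3, 3, 3, 3, 3, 3],  # zero-variance item adds nothing and enters last
]
print(greedy_item_ranking(items))
```

A real stepwise regression (or the brute-force combinatorial scan it approximates) would fit coefficients and track R-squared, but the output here has the same use: a ranked list from which a judgment-assisted reduced scale can be assembled.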
  • 23.
  • 24. the less work time that is lost
  • 25. the higher chance that one or more constructs will perform poorly if the measures are not well established/developed
  • 26. less information might be obtained about each respondent and their score on a given construct
  • 27. have to sell its meaningfulness to decision makers who will act on the results 20
  • 28. Bibliography
    Binning, J. F., & Barrett, G. V. (1989). Validity of personnel decisions: A conceptual analysis of the inferential and evidential bases. Journal of Applied Psychology, 74, 478-494.
    Catrambone, R. (1990). Specific versus general procedures in instructions. Human-Computer Interaction, 5, 49-93.
    Dillman, D. A., Smyth, J. D., & Christian, L. M. (2008). Internet, mail, and mixed-mode surveys: The tailored design method. Hoboken, NJ: Wiley.
    Donnellan, M. B., Oswald, F. L., Baird, B. M., & Lucas, R. E. (2006). The Mini-IPIP scales: Tiny-yet-effective measures of the Big Five factors of personality. Psychological Assessment, 18, 192-203.
    Emons, W. H. M., Sijtsma, K., & Meijer, R. R. (2007). On the consistency of classification using short scales. Psychological Methods, 12, 105-12.
    Girard, T. A., & Christiansen, B. K. (2008). Clarifying problems and offering solutions for correlated error when assessing the validity of selected-subtest short forms. Psychological Assessment, 20, 76-8.
    Hinkin, T. R. (1995). A review of scale development practices in the study of organizations. Journal of Management, 21, 967-988.
    Levy, P. (1968). Short-form tests: A methodological review. Psychological Bulletin, 6, 410-416.
    Loo, R. (2002). A caveat on using single-item versus multiple-item scales. Journal of Managerial Psychology, 17, 68-75.
    Lord, F. M. (1965). A strong true-score theory, with applications. Psychometrika, 3, 239-27.
    Nagy, M. S. (2002). Using a single item approach to measure facet job satisfaction. Journal of Occupational and Organizational Psychology, 75, 77-86.
    Novick, D. G., & Ward, K. (2006). Why don't people read the manual? Paper presented at SIGDOC '06: Proceedings of the 24th Annual ACM International Conference on Design of Communication.
    Oshagbemi, T. (1999). Overall job satisfaction: How good are single versus multiple-item measures? Journal of Managerial Psychology, 14, 388-403.
    Patrician, P. A. (2004). Single-item graphic representational scales. Nursing Research, 53, 347-352.
    Shamir, B., & Kark, R. (2004). A single item graphic scale for the measurement of organizational identification. Journal of Occupational and Organizational Psychology, 77, 115-123. 21
  • 29. Bibliography (Continued)
    Smith, G. T., McCarthy, D. M., & Anderson, K. G. (2000). On the sins of short form development. Psychological Assessment, 12, 102-111.
    Spandorfer, J. M., Karras, D. J., Hughes, L. A., & Caputo, C. (1995). Comprehension of discharge instructions by patients in an urban emergency department. Annals of Emergency Medicine, 25, 71-74.
    Stanton, J. M., Sinar, E., Balzer, W. K., & Smith, P. C. (2002). Issues and strategies for reducing the length of self-report scales. Personnel Psychology, 55, 167-194.
    Wanous, J. P., & Hudy, M. J. (2001). Single-item reliability: A replication and extension. Organizational Research Methods, 4, 361-375.
    Widaman, K. F., Little, T. D., Preacher, K. J., & Sawalani, G. M. (2011). On creating and using short forms of scales in secondary research. In K. H. Trzesniewski, M. B. Donnellan, & R. E. Lucas (Eds.), Secondary data analysis: An introduction for psychologists (pp. 39-61). Washington, DC: American Psychological Association. 22