Watch the entire webinar: http://info.userzoom.com/online-surveys-design-webinar.html
UserZoom teamed up with Elizabeth Ferrall-Nunge, User Experience Research Lead at Twitter, to discuss how to create effective surveys and how to avoid common survey pitfalls.
2. Speakers
Elizabeth Ferrall-Nunge, User Research Lead at Twitter (@enunge)
Alfonso de la Nuez, Co-Founder & Co-CEO at UserZoom (@delanuez23)
Twitter Hashtag #uzwebinar
3. Webinar Agenda
• When a survey is appropriate
• Attributes for a good survey
• Question types & when to use each
• Questionnaire biases
• Implementation considerations
• UX and usability-focused survey types
4. About UserZoom
• All-in-one Enterprise software solution that helps UX Pros cost-effectively test, measure and improve Customer Experience on websites and mobile apps.
• We specialize in online (or remote) research & usability testing, saving time, money and effort, while still obtaining very rich insights.
• In business since 2002 (as SaaS since 2009), with offices in Sunnyvale (CA), Manchester (UK), Munich (DE) and Barcelona (Spain).
Product Suite:
• Unmoderated Remote Usability Testing
• Online Surveys (web & mobile)
• Online Card Sorting
• Tree Testing
• Screenshot Click Testing
• Screenshot Timeout Testing (5-sec test)
• Web VOC
• Mobile VOC
• User Recruitment Tool
Follow us on Twitter @userzoom
5. Webinar History
Developed in collaboration with Aaron Sedley and
Hendrik Mueller while at Google.
Materials presented at HCIC, UX Australia, etc.
7. Survey Strengths
Good for:
• Attitudes
• Perceptions
• Likes & Dislikes
• Goals/Intent
• Task success (note: more on task-based surveys ahead)
• User characteristics
• Tracking over time
• Comparisons
8. Survey Weaknesses
Not appropriate for:
• Usability & comprehension (Usability testing)
• Cause and effect (Experiments)
• User motivations (Interviews, Observation)
• Precise user behavior, flows, context (Logs)
• Bugs (Feedback forms)
• Behaviors people are unwilling to report
• Prioritizing features (Multiple methods)
9. Complement Other Methods
Survey research (quant) and small-sample research (qual) complement each other:
• Surveys answer the qual researcher's question: "Is my data anecdotal or representative?"
• Small-sample research answers the survey researcher's question: "Why are we seeing this trend?"
11. Elements of Quality Surveys
Representativeness: Data accurately represent the target population
Validity: Responses measure the dimensions of interest
Reliability: Responses are consistent over time & samples
Bias: Questionnaire minimizes biases
Precision: Desired level of precision for key measurements
• Statistically valid comparisons
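The "statistically valid comparisons" point can be made concrete with a pooled two-proportion z-test, a standard way to check whether, say, top-box satisfaction differs between two survey segments. This sketch is illustrative and not from the deck; the segment counts are hypothetical.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Pooled two-proportion z statistic, e.g. for comparing
    top-box satisfaction between two survey segments."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical segments: 65% vs. 55% satisfied, 400 responses each
z = two_proportion_z(260, 400, 220, 400)
print(abs(z) > 1.96)  # True -> difference is significant at the 5% level
```

With ~400 responses per segment, a 10-point difference in a proportion is comfortably detectable; much smaller differences may not be.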
13. Stages of Survey Research
1. Identifying research goals and constructs
2. Determining how to sample your population
3. Question types and when to use each of them
4. Questionnaire biases to avoid
5. Other survey design considerations
6. Testing and optimizing of the survey
7. Implementation considerations & fielding
8. Survey analysis fundamentals
16. How to Sample
• What population do you want to measure?
• How many users do you have?
• What level of precision do you want?
• Do you want to segment by 'groups'?
o What's the smallest group to compare with?
• How will you invite people & field the survey?
Recommendations
• Random sampling is always better!!!
• Do not survey users more than 2-4x a year
• Target ~400 responses per segment
o 384 gives a +/-5% margin of error
• Start with small %, track response rate, adapt
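The ~400-per-segment rule of thumb follows from the standard margin-of-error formula for a proportion at 95% confidence. A minimal sketch (not from the deck) showing why 384 responses give roughly ±5%:

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case (p = 0.5) margin of error for a proportion
    at 95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(384), 3))  # 0.05 -> +/-5%
```

Note that halving the margin of error requires quadrupling the sample: ~1,536 responses for ±2.5%.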
18. Types of Survey Questions
Open-ended questions:
• Universe of answers is unknown
• Select one object from a really large amount
Closed-ended questions:
• Rate a single object
• After the universe of answers is known
19. Open-Ended: Options Unknown
What, if anything, do you find frustrating or unappealing about your smartphone?
What services or applications would you like to integrate with Pinterest? ✓
20. Open-Ended: Too Many Options
What was the make and model of your first car?
What is your favorite meal?
What is your favorite thing about working at Google? ✓
21. Open-Ended: Natural Metrics
How many hours did you work last week?
How many times a day do you use your phone to get directions? ✓
23. Closed-Ended: Answer Options Clear
How often do you withdraw cash from an ATM?
__ Less than once a month
__ About once a month
__ About 2-3 times a month
__ About once a week
__ A few times a week
__ About once a day
__ Multiple times a day
✓
24. Closed-Ended: Ranking Questions
Rank the following main dinner courses in order of preference.
Rank answers from highest (1) to lowest (6).
___ Fried chicken
___ Beef stew
___ Kangaroo steak
___ Seared tuna
___ Spaghetti
___ Seasonal vegetables
✓
25. Closed-Ended: w/o Natural Metrics
Overall, how satisfied are you with Google Drive?
"I'm 6 satisfied."
"I'm moderately satisfied."
...on a 7-point scale from extremely dissatisfied to extremely satisfied ✓
27. Closed-Ended: Unipolar vs. Bipolar
Unipolar measures:
• Starts from zero
• No natural midpoint
• Goes to an extreme
• 5 scale points: Not at all ..., Slightly ..., Moderately ..., Very ..., Extremely ...
Bipolar measures:
• Starts at extreme negative
• Has a natural midpoint
• Goes to opposite extreme positive
• 7 scale points: Extremely ..., Moderately ..., Slightly ..., Neither ... nor ..., Slightly ..., Moderately ..., Extremely ...
28. Closed-Ended: Prioritizing
If you really need prioritization help:
How important is each feature to you?
Select up to 3 features that are most important to you. ✓
30. Overview of Questionnaire Biases
1. Satisficing:
Short-cutting question answers
2. Acquiescence:
Tendency to agree to any statement
3. Social desirability:
Sticking to norms & expectations
4. Order bias:
Tendency to answer in a certain way depending on the question or response order
31. 1. Satisficing
= Respondents shortcut answering questions
= People attempt to make guesses!
Reasons:
• Question difficulty is high
• Cognitive ability to understand & answer is low
• Motivation to answer accurately is low
• Fatigue occurs due to a long questionnaire
32. 1. Satisficing: Difficult Questions
How many searches did you conduct last year? ✘
Avoid difficult or complex questions
Capture actual behavior
Ask about today's goals
33. 1. Satisficing: Complex Questions
✘
Shorten questions and answers.
Keep wording as simple as possible.
34. 1. Satisficing: No opinion, n/a, ...
Satisfaction with your smartphone:
__ Very dissatisfied
__ Slightly dissatisfied
__ Neither satisfied nor dissatisfied
__ Slightly satisfied
__ Very satisfied
__ No opinion
✘
Avoid "no opinion" (or similar) answers
Break into two questions
If you need it, make it visually distinct
35. 1. Satisficing: Large Grid Questions
Avoid large grid questions
✘
Consider separate questions for each
Add alternate row shading
36. 2. Acquiescence Bias
= Respondents tend to agree to any statement
Reasons:
• Cognitive ability or motivation to answer is low
• Question difficulty or complexity is high
• Personality tendencies skew towards
agreeableness
• Social norms suggest a "yes" response
37. 2. Acquiescence: Binary Questions
Has using Picasa increased the number of photos you share with your friends or colleagues? ✘
Yes
No
Avoid binary question types (Y/N, T/F)
Ask construct-specific questions
Measure attitudes on unbiased scale
39. 2. Acquiescence: Agreement Scales
Avoid agreement scales
✘
Ask construct-specific questions
Measure attitudes on unbiased scale
Indicate your level of trust in Shopbop.
[Extremely distrust, ..., Extremely trust] ✓
40. 3. Social Desirability
= Respondents stick to norms & expectations
Reasons:
• Opinion does not conform with social norms
• Feeling uncomfortable about answering
• Asked to provide opinion on sensitive topics
• Asked to provide identity
41. 3. Social Desirability: Social Norms
How many servings of fruits and vegetables do you consume daily?
How frequently do advertisements influence your purchases? ✘
Avoid such questions
42. 3. Social Desirability: Sensitive Topics
✘
Indicate your level of racism:
__ Not at all racist
__ Slightly racist
__ Moderately racist
__ Very racist
__ Extremely racist
Avoid sensitive questions
Allow respondents to answer anonymously
Use self-administered surveys
43. 3. Social Desirability: Identity
What is your full name?
What is your home address? ✘
For sensitive topics, allow respondents to answer anonymously
Use self-administered surveys
44. 4. Response Order Bias
= Tendency to select answers at the beginning
(primacy) or end (recency) of an answer list/scale
Reasons:
• Unconsciously apply meaning based on order
• Answer list is too long
• Answer list cannot be viewed as a whole
• Appropriate answer cannot be easily identified
45. 4. Response Order Bias: Answer List
Best
Typical
Worst ✘
Randomize the answer list order
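One common way to apply this advice is to shuffle nominal answer lists independently for each respondent, so no option systematically benefits from primacy or recency. A minimal sketch (illustrative, not from the deck):

```python
import random

def randomized_answers(options, seed=None):
    """Return a per-respondent shuffled copy of the answer list,
    leaving the original list untouched."""
    rng = random.Random(seed)  # per-respondent seed, or None for system entropy
    shuffled = list(options)
    rng.shuffle(shuffled)
    return shuffled

# Each respondent sees an independently randomized order:
print(randomized_answers(["Best", "Typical", "Worst"]))
```

Only shuffle lists with no inherent order; ordered rating scales (e.g. dissatisfied to satisfied) should keep their order, or at most be reversed for a random half of respondents.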
46. 4. Question Order Bias
= Respondent answers are influenced by the
order in which questions appear in the survey
Reasons:
• Attention is drawn to dimensions which may
not have otherwise been considered
47. 4. Question Order Bias: Example
1. Which of the following features would you like to see improved?
[Battery life, weight, screen size, ...]
2. What do you find most frustrating about your smartphone?
✘
49. Leading Questions
Do you agree or disagree with this statement: "I liked the surveys workshop a great deal." ✘
Avoid leading questions
Ask questions, not statements.
Measure attitude on a neutral scale.
50. Recall and Prediction
Do you prefer the previous or the current version of Facebook?
Would you like Walmart more if its aisles were less cluttered? ✘
Avoid such questions entirely
Ask before and after, then compare
Ask for each version, then compare
51. Reference Periods
How many times did you work from home in Q1? ✘
Define reference periods
Avoid terms that may be misinterpreted
State references at the beginning
Between January 1 and March 31, 2012, how many times did you work from home? ✓
52. Cute Language
Overall, what do you think of our new mobile app?
__ It's great!
__ Only OK. A little confusing.
__ This UI sucks. Are you guys a bunch of baboons? ✘
Don't get cute.
Use simple & straightforward language.
53. Broad Questions
How well do you know your coworkers? ✘
Avoid broad questions
Figure out what you want to measure
In the past month, how many times did you see your Tech Lead outside of work? ✓
54. Double-Barreled Questions
How satisfied are you with the billing and payment options? ✘
Avoid asking about multiple things
Use separate questions
How satisfied are you with the billing options?
How satisfied are you with the payment options?
✓
55. Launch Readiness
Is the redesign ready to launch?
Which of the following features should Instagram work on next? ✘
Avoid hypotheticals
Ask about current experiences
What, if anything, do you find frustrating or unappealing about Instagram? ✓
74. Maximizing Response Rates
Dillman's Total Design Method (1978):
o Put questions directly related to the topic upfront
o Make the questionnaire appear small/short
o Personalize the invitation for each respondent
o Explain the overall usefulness & importance of the respondent's participation
o Explain the confidentiality of the collected data
o Pre-announce the survey a week in advance
o Send reminders after 1 and 3 weeks
75. Maximizing Response Rates
Other things to keep in mind:
o Explain your relationship to the respondent
o Potentially offer small gifts as incentives
o For email surveys, Mondays appear to be the best day
78. Surveys within the UCD Process
UserZoom’s Integrated Online Research Platform
For UX Research:
• Test live website, mobile app
• Analyze competitors
• Understand your visitors
For UX Design:
• Information architecture
• Validate design
• Iterative testing (AGILE)
For CX Measurement:
• Web VOC
• Mobile VOC
Picture source: SAP
79. Task-based Survey
In an online, task-based survey, a large sample of participants (typically 100 to 200) is asked to complete navigational tasks on a website or prototype, such as looking for information, registering, or making a purchase or reservation. The focus is on performance and satisfaction.
Task: Please locate the most popular full TV episode of all time on Hulu. Please make note of the episode, as you will be asked for it later. Click on the success button once you are done.
80. Task-based Survey
Validation question, example from a pilot study. Hulu is not UserZoom’s customer.
81. Card Sorting Survey
Card sorts help improve the way information is structured on the site so that it
matches users’ mental models.
Open card sort
82. Card Sorting Survey
Closed card sort
Instructions:
1. Please start by reading each of the items on the left
2. Sort the items into meaningful groups by dragging from the left and dropping on the right
83. Tree Testing Survey
Tree testing complements card sorting by testing the site structure created from card sorting.
1. You are on an office supplies website. Where would you go to find a mouse pad? Please click through the menu until you locate where you would expect to find it.
Instructions:
1. You'll be asked to find an item using a menu structure.
2. Keep clicking through until you have located the item.
3. You can always go back to search in other areas.
85. Click Testing
Where would you click to find more information about the reliability of suppliers? Please click once.
After you have clicked once, please hit Next.
86. Voice of Customer (VOC) Survey
Find out things like:
Who are the users that visit your website?
Why do they visit?
Are they able to navigate successfully?
Would they recommend it to others?
88. Recommended Reading
• Groves, Fowler, Couper, et al. (2009), Survey
Methodology
• J. Wright (2010), Handbook of Survey Research
• Floyd J., Fowler Jr. (1995), Improving Survey
Questions: Design and Evaluation
• Albert W., Tullis T., Tedesco D. (2010), Beyond the
Usability Lab: Conducting Large-scale Online
User Experience Studies