KPI-UX-BostonUPA-20120507
1. Best Practices for Defining, Evaluating &
Communicating Key Performance
Indicators (KPIs) of User Experience
Meng Yang
User Experience Researcher
IBM Software Group
2. Agenda
• Why measure user experience
• What user experience KPIs or metrics to use
• How to communicate user experience metrics
• Best practices and future work
3. If you cannot measure it, you cannot improve it.
-Lord Kelvin
4. Design: intuition-driven or data-driven?
Reference: Metrics-driven design by Joshua Porter
http://www.slideshare.net/andrew_null/metrics-driven-design-by-joshua-porter
5. 5 Reasons why metrics are a designer’s best friend
• Metrics reduce arguments based on opinion.
• Metrics give you answers about what really works.
• Metrics show you where you’re strong as a designer.
• Metrics allow you to test anything you want.
• Clients or stakeholders love metrics.
Reference: Metrics-driven design by Joshua Porter, Mar. 2011
http://www.slideshare.net/andrew_null/metrics-driven-design-by-joshua-porter
6. 7 Ingredients of a successful UX strategy
• Business strategy
• Competitive benchmarking
• Web analytics
• Behavioral segmentation, or personas, and usage scenarios
• Interaction modeling
• Prioritization of new features and functionality
• Social / mobile / local
Reference: Paul Bryan’s article on UXmatters, October 2011.
http://www.uxmatters.com/mt/archives/2011/10/7-ingredients-of-a-successful-ux-strategy.php
7. Lean Startup / Lean UX movement
Reference: The Lean Startup by Eric Ries: http://theleanstartup.com/principles
8. Agenda
• Why measure user experience
• What user experience KPIs or metrics to use
• How to communicate user experience metrics
• Best practices and future work
9. Characteristics of good metrics
• Actionable [1]
• Accessible
• Auditable
• Powerful
• Low-cost
• Easy-to-use
• Business alignment [2]
• Honest assessment
• Consistency
• Repeatability and reproducibility
• Actionability
• Time-series tracking
• Predictability
• Peer comparability
Reference:
[1] Book by Eric Ries: The Lean Startup: http://theleanstartup.com/
[2] Book by Forrest Breyfogle: “Integrated Enterprise Excellence Volume II: Business Deployment”
10. User experience metrics used
Task-level:
• Task success rate
• Task easiness rating (SEQ)
• Task error rate
• Task time
• Clickstream data
- First click analysis
- Heat map
- Number of clicks
• (CogTool) task time and clicks for optimal path
Product-level:
• System Usability Scale (SUS) score
• Net Promoter Score (NPS)
11. System Usability Scale (SUS)
• Why chosen?
• The most sensitive post-study questionnaire. [2]
• Free, short, valid, and reliable. [1]
• A single SUS score can be calculated and a grade can be assigned. [1]
• Over 500 user studies available for comparison. [1]
• The all-positive version was chosen: it shows no significant differences from the mixed version but is easier for people to answer. [2]
• Customized levels for color coding
• Level 0 (poor): less than 63
• Level 1 (minimally acceptable): 63-72 [63 is the lower bound of a C-]
• Level 2 (good): 73-78 [73 is the lower bound of a B-]
• Level 3 (excellent): 79-100 [79 is the lower bound of an A-]
Reference:
[1] Jeff Sauro's blog entry: Measuring Usability with the System Usability Scale (SUS)
http://www.measuringusability.com/sus.php
[2] Book by Jeff Sauro & James Lewis: Quantifying the User Experience: Practical Statistics for User Research.
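As a concrete reference, the all-positive SUS version described above can be scored with simple arithmetic: each 1-5 item contributes (response - 1), and the sum is scaled by 2.5 to give a 0-100 score. A minimal Python sketch; the function names and the mapping of the slide's color-coding levels onto code are illustrative, not from the deck:

```python
def sus_score(responses):
    """SUS score from ten 1-5 item responses (all-positive version).

    With all-positive wording each item contributes (response - 1),
    and the sum is scaled by 2.5 to yield a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS uses exactly 10 items")
    if any(not 1 <= r <= 5 for r in responses):
        raise ValueError("responses must be on a 1-5 scale")
    return sum(r - 1 for r in responses) * 2.5

def sus_level(score):
    """Slide's customized color-coding levels for a 0-100 SUS score."""
    if score < 63:
        return "Level 0 (poor)"
    if score < 73:
        return "Level 1 (minimally acceptable)"
    if score < 79:
        return "Level 2 (good)"
    return "Level 3 (excellent)"
```

All-5 responses give 100 and all-1 responses give 0; the 63/73/79 cut points are the C-/B-/A- grade boundaries cited on the slide.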
12. Net Promoter Score (NPS)
• Why chosen?
• Industry standard and widely popular.
• Benchmark data to compare.
• Customized levels
• Level 0 (poor): less than 24%
• Level 1 (minimally acceptable): 24% - 45% [24% is the lower range for computer software]
• Level 2 (good): 46% - 67% [46% is the mean for computer software]
• Level 3 (excellent): 68% - 100% [68% is the upper range for computer software]
Reference:
[1] NPS website: http://www.netpromoter.com/why-net-promoter/know/
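NPS itself is simple arithmetic over 0-10 likelihood-to-recommend ratings: the percentage of promoters (9-10) minus the percentage of detractors (0-6). A minimal sketch (the function name is illustrative):

```python
def net_promoter_score(ratings):
    """NPS from a list of 0-10 likelihood-to-recommend ratings.

    Promoters rate 9-10, detractors 0-6; passives (7-8) count only
    in the denominator. The result ranges from -100 to +100.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)
```

For example, two promoters and two detractors out of six respondents cancel out to an NPS of 0, even though four of the six gave favorable-looking ratings.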
13. Task success rate
• Why chosen?
• Easy to collect
• Easy to understand
• Popular among the UX community [1]
• Customized levels
• Fail : less than 75%
• Pass: 75% or more
Reference:
[1] Jakob Nielsen’s article on usability metrics, Jan. 2001: http://www.useit.com/alertbox/20010121.html
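Computing the rate and applying the slide's 75% criterion is straightforward (function names are illustrative):

```python
def task_success_rate(outcomes):
    """Percentage of task attempts completed successfully.

    `outcomes` is a sequence of booleans, one per attempt.
    """
    if not outcomes:
        raise ValueError("need at least one attempt")
    return 100.0 * sum(outcomes) / len(outcomes)

def task_passes(rate_percent):
    """Slide's customized criterion: pass at 75% or more."""
    return rate_percent >= 75.0
```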
14. Task ease of use: SEQ (Single Ease Question)
• Why chosen?
• Reliable, sensitive & valid. [1]
• Short, easy to respond to, easy to administer & easy to score [1]
• The second most sensitive post-task question, after SMEQ, but much simpler. [2]
• Customized levels for color coding
• Fail : less than 75%
• Pass: 75% or more
Reference:
[1] Jeff Sauro's blog entry: If you could only ask one question, use this one
http://www.measuringusability.com/blog/single-question.php
[2] Book by Jeff Sauro & James Lewis: Quantifying the User Experience: Practical Statistics for User Research
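The slide reuses the 75% pass criterion for SEQ, but does not say how the 1-7 rating maps onto a percentage. One plausible reading, shown here purely as an assumption, is the mean rating normalized by the scale maximum:

```python
def seq_percent(ratings, scale_max=7):
    """Mean SEQ rating expressed as a percentage of the scale maximum.

    ASSUMPTION: the deck does not define this mapping; normalizing
    the mean 1-7 rating by the scale maximum is one possible
    interpretation of its 75% pass criterion.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    return 100.0 * (sum(ratings) / len(ratings)) / scale_max
```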
15. Summary of user experience metrics chosen

Task success rate
• Definition: percentage of tasks that users complete successfully
• Why chosen: easy to collect; easy to understand; popular among the UX community
• Methods: large-scale usability tests; small-scale usability tests
• Customized success criteria: Fail: less than 75%; Pass: 75% or more

Task ease of use
• Definition: one standard Single Ease Question (SEQ)
• Why chosen: reliable, sensitive & valid; short, easy to respond to, easy to administer & easy to score; the second most sensitive post-task question, after SMEQ, but much simpler
• Methods: large-scale usability tests; small-scale usability tests
• Customized success criteria: Fail: less than 75%; Pass: 75% or more

Net Promoter Score (NPS)
• Definition: one standard recommendation question
• Why chosen: industry standard and widely popular; benchmark data to compare
• Methods: large-scale usability tests; large-scale surveys
• Customized success criteria: Poor: less than 24%; Minimally acceptable: 24% - 45%; Good: 46% - 67%; Excellent: 68% - 100%

System Usability Scale (SUS)
• Definition: a list of 10 standard ease-of-use questions (positive version)
• Why chosen: free, short, valid, and reliable; a single SUS score can be calculated and a grade can be assigned; over 500 user studies available for comparison; the most sensitive post-study questionnaire
• Methods: large-scale usability tests; large-scale surveys
• Customized success criteria: Poor: less than 63; Minimally acceptable: 63-72; Good: 73-78; Excellent: 79-100
16. Clickstream data
• Good to have
• Yet another way to visually illustrate the problems shown in other metrics such as task success rate and easiness ratings.
• Navigation path is very helpful to have.
• But hard to implement & analyze in UserZoom
• Approach 1: asking participants to install a plugin, which reduces the participation rate.
• Approach 2: inserting a line of JavaScript code on every page of the website, which is hard to achieve.
17. Task time
• Good for benchmark comparison
• between prototypes/releases.
• with competitors.
• But hard to measure accurately
• For large-scale studies, people might be multi-tasking.
• For small-scale studies, people might be asked to think aloud.
• You don’t know how hard people tried on the task.
18. Agenda
• Why measure user experience
• What user experience KPIs or metrics to use
• How to communicate user experience metrics
• Best practices and future work
20. User experience scorecard examples
• High-priority but poor-usability tasks should be the focus area.
• Core use cases with the most failed tasks should be the focus.
Fake data for illustration purposes
21. Illustrated task flow by task performance data
Fake data for illustration purposes
23. User story mapping (in exploration)
• Incorporate tasks and metrics into the agile development process.
Reference:
[1] User story mapping presentation by Steve Rogalsky
http://www.slideshare.net/SteveRogalsky/user-story-mapping-11507966
24. Agenda
• Why measure user experience
• What user experience KPIs or metrics to use
• How to communicate user experience metrics
• Best practices and future work
25. Best practices
• Get executive buy-in on the user experience metrics and dashboard.
• Focus on core use cases and top tasks to evaluate.
• Use standardized questions/metrics for peer comparability.
• Try random sampling instead of convenience sampling when recruiting participants for large-scale usability tests.
• Visualization is the key to effective communication.
• KPIs/metrics catch people’s attention, but qualitative information
provides the insights.
26. Future work
• Align UX metrics with business goals.
• User experience vs. customer experience
• Apply metrics on interaction models and scenarios.
• Communicate UX metrics to influence product strategy.
• Incorporate UX metrics in the agile development process.
• Collaborate with analytics team to gather metrics such as
Engagement/Adoption/Retention and cohort analysis.
• How do we measure usefulness (vs. ease of use)?
Speaker notes: UX researcher working in IBM Lotus for 7 years, on various products including Notes, ST, and SmartCloud, but mostly social computing. In recent years we have focused more on quantitative user experience research: conducting large-scale unmoderated usability studies and large-scale surveys, and building our dashboard, which is a good communication tool. We still do lots of iterative usability tests and user interviews, with more triangulation of quantitative and qualitative research.
Lord Kelvin was a physicist. Qualitative vs. quantitative data in UX.