Best Practices for Defining, Evaluating & Communicating Key Performance Indicators (KPIs) of User Experience

Meng Yang
User Experience Researcher
IBM Software Group
Agenda

  •  Why measure user experience
  •  What user experience KPIs or metrics to use
  •  How to communicate user experience metrics
  •  Best practices and future work
If you cannot measure it, you cannot improve it. 	

                                                       -Lord Kelvin
Design: intuition-driven or data-driven?	





Reference: Metrics-driven design by Joshua Porter
http://www.slideshare.net/andrew_null/metrics-driven-design-by-joshua-porter
5 Reasons why metrics are a designer’s best friend	

 •    Metrics reduce arguments based on opinion.	

 •    Metrics give you answers about what really works.	

 •    Metrics show you where you’re strong as a designer.	

 •    Metrics allow you to test anything you want.	

 •    Clients or stakeholders love metrics.	





Reference: Metrics-driven design by Joshua Porter, Mar. 2011
http://www.slideshare.net/andrew_null/metrics-driven-design-by-joshua-porter
7 Ingredients of a successful UX strategy	

 •    Business strategy	

 •    Competitive benchmarking	

 •    Web analytics	

 •    Behavioral segmentation, or personas, and usage scenarios	

 •    Interaction modeling	

 •    Prioritization of new features and functionality	

 •    Social / mobile / local	





Reference: Paul Bryan's article on UXmatters, October 2011.
http://www.uxmatters.com/mt/archives/2011/10/7-ingredients-of-a-successful-ux-strategy.php
Lean start-up/lean UX movement	





Reference: Lean Startup by Eric Ries http://theleanstartup.com/principles
Agenda

  •  Why measure user experience
  •  What user experience KPIs or metrics to use
  •  How to communicate user experience metrics
  •  Best practices and future work
Characteristics of good metrics

  •  Actionable [1]
  •  Accessible
  •  Auditable
  •  Powerful
  •  Low-cost
  •  Easy-to-use
  •  Business alignment [2]
  •  Honest assessment
  •  Consistency
  •  Repeatability and reproducibility
  •  Actionability
  •  Time-series tracking
  •  Predictability
  •  Peer comparability

Reference:
[1] Eric Ries: The Lean Startup. http://theleanstartup.com/
[2] Forrest Breyfogle: "Integrated Enterprise Excellence Volume II: Business Deployment"
User experience metrics used

Task-level:
  •  Task success rate
  •  Task easiness rating (SEQ)
  •  Task error rate
  •  Task time
  •  Clickstream data (first click analysis, heat map, number of clicks)
  •  (CogTool) task time and clicks for the optimal path

Product-level:
  •  System Usability Scale (SUS) score
  •  Net Promoter Score (NPS)
System Usability Scale (SUS)

  •  Why chosen?
       •  The most sensitive post-study questionnaire. [2]
       •  Free, short, valid, and reliable. [1]
       •  A single SUS score can be calculated and a grade can be assigned. [1]
       •  Over 500 user studies available for comparison. [1]
       •  The positive version was chosen: no significant differences were found versus the mixed version, and it is easier for people to answer. [2]

  •  Customized levels for color coding
       •  Level 0 (poor): less than 63
       •  Level 1 (minimally acceptable): 63 - 72 [63 is the lower range of C-]
       •  Level 2 (good): 73 - 78 [73 is the lower range of B-]
       •  Level 3 (excellent): 79 - 100 [79 is the lower range of A-]

Reference:
[1] Jeff Sauro's blog entry: Measuring Usability with the System Usability Scale (SUS). http://www.measuringusability.com/sus.php
[2] Jeff Sauro & James Lewis: Quantifying the User Experience: Practical Statistics for User Research.
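As a hedged illustration (not part of the deck), the scoring behind these levels can be sketched in Python. For the all-positive SUS version described above, each of the ten 1-5 ratings contributes (rating − 1) and the sum is scaled by 2.5, giving a 0-100 score; the cutoffs mirror the deck's customized color coding. Function names are illustrative.

```python
def sus_score(responses):
    """SUS score for one participant on the all-positive version.

    `responses` is a list of ten ratings on a 1-5 scale. Each item
    contributes (rating - 1); the sum is scaled by 2.5 to 0-100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten ratings on a 1-5 scale")
    return sum(r - 1 for r in responses) * 2.5


def sus_level(score):
    """Map a SUS score to the deck's customized color-coding levels."""
    if score < 63:
        return "poor"                   # Level 0
    if score <= 72:
        return "minimally acceptable"   # Level 1
    if score <= 78:
        return "good"                   # Level 2
    return "excellent"                  # Level 3
```

For example, a participant answering 4 ("agree") to all ten items scores 75, which falls in the "good" band.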
Net Promoter Score (NPS)

  •  Why chosen?
       •  Industry standard and widely popular.
       •  Benchmark data to compare against.

  •  Customized levels
       •  Level 0 (Poor): less than 24%
       •  Level 1 (Minimally acceptable): 24% - 45% [24% is the lower range for computer software]
       •  Level 2 (Good): 46% - 67% [46% is the mean for computer software]
       •  Level 3 (Excellent): 68% - 100% [68% is the upper range for computer software]

Reference:
[1] NPS website: http://www.netpromoter.com/why-net-promoter/know/
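The calculation behind these levels is the standard NPS formula: from 0-10 likelihood-to-recommend ratings, promoters (9-10) minus detractors (0-6), each as a percentage of all respondents. A minimal sketch (function name is illustrative):

```python
def nps(ratings):
    """Net Promoter Score from 0-10 likelihood-to-recommend ratings.

    Promoters rate 9-10, detractors rate 0-6 (7-8 are passives);
    NPS = % promoters - % detractors, ranging from -100 to +100.
    """
    if not ratings:
        raise ValueError("no ratings given")
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100.0 * (promoters - detractors) / len(ratings)
```

With six promoters, two passives, and two detractors out of ten respondents, the score is (6 − 2) / 10 = +40%, which lands in the deck's "minimally acceptable" band.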
Task success rate

  •  Why chosen?
       •  Easy to collect.
       •  Easy to understand.
       •  Popular among the UX community. [1]

  •  Customized levels
       •  Fail: less than 75%
       •  Pass: 75% or more

Reference:
[1] Jakob Nielsen's article on usability metrics, Jan. 2001: http://www.useit.com/alertbox/20010121.html
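As a small sketch (not from the deck; names are illustrative), the rate is the share of participants who completed the task, with the deck's 75% cutoff applied for color coding:

```python
def task_success_rate(outcomes):
    """Percentage of participants who completed the task.

    `outcomes` is a list of booleans, one per participant
    (True = completed the task successfully).
    """
    if not outcomes:
        raise ValueError("no outcomes given")
    return 100.0 * sum(bool(o) for o in outcomes) / len(outcomes)


def pass_fail(rate):
    """Apply the deck's customized criterion: pass at 75% or more."""
    return "pass" if rate >= 75 else "fail"
```

So 3 successes out of 4 participants gives 75.0%, which just passes.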
Task ease of use: SEQ (Single Ease Question)

  •  Why chosen?
       •  Reliable, sensitive & valid. [1]
       •  Short, easy to respond to, easy to administer & easy to score. [1]
       •  The second most sensitive post-task question after SMEQ, but much simpler. [2]

  •  Customized levels for color coding
       •  Fail: less than 75%
       •  Pass: 75% or more

Reference:
[1] Jeff Sauro's blog entry: If you could only ask one question, use this one. http://www.measuringusability.com/blog/single-question.php
[2] Jeff Sauro & James Lewis: Quantifying the User Experience: Practical Statistics for User Research.
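A hedged sketch of summarizing SEQ responses: the SEQ is a single 7-point rating (1 = very difficult, 7 = very easy), and the deck's pass criterion reads as the percentage of participants who considered the task easy. The deck does not state the cutoff for "easy", so `easy_threshold` below is a hypothetical parameter; the function name is illustrative.

```python
def seq_summary(ratings, easy_threshold=5):
    """Summarize SEQ responses (1 = very difficult, 7 = very easy).

    Returns (mean rating, percentage of respondents at or above
    `easy_threshold`). The threshold for counting a response as
    "easy" is a hypothetical choice, not specified in the deck.
    """
    if not ratings or not all(1 <= r <= 7 for r in ratings):
        raise ValueError("expected ratings on a 1-7 scale")
    mean = sum(ratings) / len(ratings)
    pct_easy = 100.0 * sum(r >= easy_threshold for r in ratings) / len(ratings)
    return mean, pct_easy
```

For ratings [7, 6, 5, 4], this yields a mean of 5.5 with 75% rating the task easy, which would just pass the deck's criterion.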
Summary of user experience metrics chosen

(Fake data for illustration purposes)

Task success rate
  Definition: percentage of tasks that users complete successfully
  Why chosen: easy to collect; easy to understand; popular among the UX community
  Methods: large-scale and small-scale usability tests
  Customized success criteria: Fail: less than 75%; Pass: 75% or more

Task ease of use
  Definition: one standard Single Ease Question (SEQ)
  Why chosen: reliable, sensitive & valid; short, easy to respond to, easy to administer & easy to score; the second most sensitive post-task question after SMEQ, but much simpler
  Methods: large-scale and small-scale usability tests
  Customized success criteria: Fail: less than 75%; Pass: 75% or more

Net Promoter Score (NPS)
  Definition: one standard recommendation question
  Why chosen: industry standard and widely popular; benchmark data to compare against
  Methods: large-scale usability tests; large-scale surveys
  Customized success criteria: Poor: less than 24%; Minimally acceptable: 24% - 45%; Good: 46% - 67%; Excellent: 68% - 100%

System Usability Scale (SUS)
  Definition: a list of 10 standard ease-of-use questions (positive version)
  Why chosen: free, short, valid, and reliable; a single SUS score can be calculated and a grade can be assigned; over 500 user studies available for comparison; the most sensitive post-study questionnaire
  Methods: large-scale usability tests; large-scale surveys
  Customized success criteria: Poor: less than 63; Minimally acceptable: 63 - 72; Good: 73 - 78; Excellent: 79 - 100
Clickstream data

  •  Good to have
       •  Another way to visually illustrate the problems shown in other metrics such as task success rate and easiness ratings.
       •  The navigation path is very helpful to have.

  •  But hard to implement & analyze in UserZoom
       •  Approach 1: ask participants to install a plugin, which reduces the participation rate.
       •  Approach 2: insert a line of JavaScript code on every page of the website, which is hard to achieve.
Task time

  •  Good for benchmark comparison
       •  between prototypes/releases.
       •  with competitors.

  •  But hard to measure accurately
       •  In large-scale studies, people might be multitasking.
       •  In small-scale studies, people might be asked to think aloud.
       •  You don't know how hard people tried the task.
Agenda

  •  Why measure user experience
  •  What user experience KPIs or metrics to use
  •  How to communicate user experience metrics
  •  Best practices and future work
Example task performance table

Fake data for illustration purposes

Task performance data summary (% means percentage of all participants for each task):

Task     Successful with the task   Considered task easy   Highlights of the problem        Recommendation
Task 1   90%                        95%                    Easy task                        None
Task 2   61%                        42%                    Hard to find the action button   xx
Task 3   55%                        30%                    Too many clicks                  xx
Task 4   85%                        90%                    Relatively easy task             xx

Benchmark comparison between two studies on the same tasks:

Task     Successful: Study 1 / Study 2 / Change   Considered easy: Study 1 / Study 2 / Change
Task 1   89% / 84% / -5.1%                        60% / 63% / +3.0%
Task 2   89% / 70% / -19.0%                       65% / 60% / -5.2%
Task 3   62% / 55% / -6.8%                        75% / 87% / +12.0%
Task 4   71% / 90% / +19.0%                       56% / 80% / +24.0%

© Copyright IBM Corporation 2012
User experience scorecard examples	

                                                        High priority but poor
                                                        usability tasks should
                                                          be the focus area	





     Core use cases that have
      the most failed tasks
       should be the focus




               Fake data for illustration purposes
Illustrated task flow by task performance data	





           Fake data for illustration purposes
Portfolio dashboard and health index	




                                  Details on how to
                                 calculate the health
                                        index
User story mapping (in exploration)

Incorporate tasks and metrics into the agile development process.

Reference:
[1] User story mapping presentation by Steve Rogalsky
http://www.slideshare.net/SteveRogalsky/user-story-mapping-11507966
Agenda

  •  Why measure user experience
  •  What user experience KPIs or metrics to use
  •  How to communicate user experience metrics
  •  Best practices and future work
Best practices

  •  Get executive buy-in on the user experience metrics and dashboard.
  •  Focus on core use cases and top tasks to evaluate.
  •  Use standardized questions/metrics for peer comparability.
  •  Try random sampling instead of convenience sampling when recruiting participants for large-scale usability tests.
  •  Visualization is the key to effective communication.
  •  KPIs/metrics catch people's attention, but qualitative information provides the insights.
Future work

  •  Align UX metrics with business goals.
       •  User experience vs. customer experience
  •  Apply metrics to interaction models and scenarios.
  •  Communicate UX metrics to influence product strategy.
  •  Incorporate UX metrics into the agile development process.
  •  Collaborate with the analytics team to gather metrics such as engagement/adoption/retention and cohort analysis.
  •  How do we measure usefulness (vs. ease of use)?
Thank You!

Questions?

KPI-UX-BostonUPA-20120507

  • 1. Best Practices for Defining, Evaluating & Communicating Key Performance Indicators (KPIs) of User Experience Meng Yang User Experience Researcher IBM Software Group
  • 2. Agenda •  Why measuring user experience •  What user experience KPIs or metrics to use •  How to communicate user experience metrics •  Best practices and future work
  • 3. If you cannot measure it, you cannot improve it. -Lord Kelvin
  • 4. Design: intuition-driven or data-driven? Reference: Metrics-driven design by Joshua porter http://www.slideshare.net/andrew_null/metrics-driven-design-by-joshua-porter
  • 5. 5 Reasons why metrics are a designer’s best friend •  Metrics reduce arguments based on opinion. •  Metrics give you answers about what really works. •  Metrics show you where you’re strong as a designer. •  Metrics allow you to test anything you want. •  Clients or stakeholders love metrics. Reference: Metrics-driven design by Joshua porter in Mar. 2011 http://www.slideshare.net/andrew_null/metrics-driven-design-by-joshua-porter
  • 6. 7 Ingredients of a successful UX strategy •  Business strategy •  Competitive benchmarking •  Web analytics •  Behavioral segmentation, or personas, and usage scenarios •  Interaction modeling •  Prioritization of new features and functionality •  Social / mobile / local Reference: Paul Bryan’s article on UXmatters in October, 2011. http://www.uxmatters.com/mt/archives/2011/10/7-ingredients-of-a-successful-ux-strategy.php
  • 7. Lean start-up/lean UX movement Reference:: Lean Startup by Eric Ries http://theleanstartup.com/principles
  • 8. Agenda •  Why measuring user experience •  What user experience KPIs or metrics to use •  How to communicate user experience metrics •  Best practices and future work
  • 9. Characteristics of good metrics •  Actionable [1] •  Business alignment [2] •  Accessible •  Honest assessment •  Auditable •  Consistency •  Powerful •  Repeatability and •  Low-cost reproducibility •  Easy-to-use •  Actionability •  Time-series tracking •  Predictability •  Peer comparability Reference: [1]. Book by Eric Ries: Lean Start up : http://theleanstartup.com/ [2]. Book by Forrest Breyfogle: “Integrated Enterprise Excellence Volume II: Business Deployment”
  • 10. User experience metrics used Task-level Product-level Task success rate System Usability Scale (SUS) score Task easiness rating (SEQ) Net Promoter Score (NPS) Task error rate Task time Clickstream data - First click analysis - Heat map - Number of clicks (CogTool) task time and clicks for optimal path
  • 11. System Usability Scale (SUS) •  Why chosen? •  The most sensitive post-study questionnaire. [2] •  Free, short, valid, and reliable. [1] •  A single SUS score can be calculated and a grade can be assigned. [1] •  Over 500 user studies to be compared with. [1] •  Chose positive version because no significant differences found with mixed version but easier for people to answer. [2] •  Customized levels for color coding •  Level 0 (poor) : Less than 63 •  Level 1(minimally acceptable): 63-72 [63 is the lower range of C-] •  Level 2 (good): 73 - 78 [73 is the lower range of B-] •  Level 3 (Excellent): 79 - 100 [79 is the lower range of A-] Reference: [1] Jeff Sauro's blog entry: Measuring Usability with the System Usability Scale (SUS) http://www.measuringusability.com/sus.php [2] Book by Jeff Sauro & James Lewis: Quantifying the User Experience: Practical Statistics for User Research.
  • 12. Net Promoter Score (NPS)
  •  Why chosen?
  •  Industry standard and widely popular.
  •  Benchmark data to compare against.
  •  Customized levels
  •  Level 0 (poor): less than 24%
  •  Level 1 (minimally acceptable): 24% - 45% [24% is the lower range for computer software]
  •  Level 2 (good): 46% - 67% [46% is the mean for computer software]
  •  Level 3 (excellent): 68% - 100% [68% is the upper range for computer software]
  Reference: [1] NPS website: http://www.netpromoter.com/why-net-promoter/know/
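  The standard NPS calculation (percentage of promoters minus percentage of detractors) and the slide's custom bands can be sketched like this; the level mapping is an illustration of the thresholds above, not an official NPS grading:

```python
def nps(ratings):
    """Net Promoter Score from 0-10 likelihood-to-recommend ratings:
    percentage of promoters (9-10) minus percentage of detractors (0-6)."""
    if not ratings:
        raise ValueError("no ratings")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

def nps_level(score):
    """Map an NPS to the customized levels on the slide."""
    if score < 24:
        return "Level 0 (poor)"
    elif score < 46:
        return "Level 1 (minimally acceptable)"
    elif score < 68:
        return "Level 2 (good)"
    return "Level 3 (excellent)"
```

  Note that passives (7-8) count toward the denominator but neither add to nor subtract from the score.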
  • 13. Task success rate
  •  Why chosen?
  •  Easy to collect
  •  Easy to understand
  •  Popular among the UX community [1]
  •  Customized levels
  •  Fail: less than 75%
  •  Pass: 75% or more
  Reference: [1] Jakob Nielsen’s article on usability metrics, January 2001: http://www.useit.com/alertbox/20010121.html
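  Binary task success rate and the 75% pass/fail cutoff above are simple enough to sketch directly; the threshold default mirrors the slide's customized levels:

```python
def task_success_rate(outcomes):
    """Task success rate: percentage of participants who completed
    the task, from a list of True/False (or 1/0) outcomes."""
    if not outcomes:
        raise ValueError("no outcomes")
    return 100.0 * sum(1 for o in outcomes if o) / len(outcomes)

def pass_fail(rate, threshold=75.0):
    """Apply the slide's cutoff: 75% or more passes, less fails."""
    return "Pass" if rate >= threshold else "Fail"
```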
  • 14. Task ease of use: SEQ (Single Ease Question)
  •  Why chosen?
  •  Reliable, sensitive & valid. [1]
  •  Short, easy to respond to, easy to administer & easy to score. [1]
  •  The second most sensitive after-task question, next to the SMEQ, but much simpler. [2]
  •  Customized levels for color coding
  •  Fail: less than 75%
  •  Pass: 75% or more
  Reference:
  [1] Jeff Sauro's blog entry: If you could only ask one question, use this one http://www.measuringusability.com/blog/single-question.php
  [2] Book by Jeff Sauro & James Lewis: Quantifying the User Experience: Practical Statistics for User Research
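  The pass/fail level above is expressed as a percentage of participants who considered the task easy, which implies a cutoff on the 7-point SEQ scale. The deck does not state which ratings count as "easy"; treating 5 and above as easy is an assumption of this sketch, exposed as a parameter:

```python
def seq_easy_percentage(ratings, easy_cutoff=5):
    """Percentage of participants who considered the task easy,
    from 7-point SEQ ratings (1 = very difficult, 7 = very easy).
    The cutoff of 5 is an assumption, not stated in the deck."""
    if not ratings:
        raise ValueError("no ratings")
    return 100.0 * sum(1 for r in ratings if r >= easy_cutoff) / len(ratings)
```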
  • 15. Summary of user experience metrics chosen
  Task success rate
  •  Definition: percentage of tasks that users complete successfully
  •  Why chosen: easy to collect; easy to understand; popular among the UX community
  •  Methods: large-scale usability testing; small-scale usability testing
  •  Customized success criteria: Fail: less than 75%; Pass: 75% or more
  Task ease of use
  •  Definition: one standard Single Ease Question (SEQ)
  •  Why chosen: reliable, sensitive & valid; short, easy to respond to, easy to administer & easy to score; the second most sensitive after-task question, next to the SMEQ, but much simpler
  •  Methods: large-scale usability testing; small-scale usability testing
  •  Customized success criteria: Fail: less than 75%; Pass: 75% or more
  Net Promoter Score (NPS)
  •  Definition: one standard recommendation question
  •  Why chosen: industry standard and widely popular; benchmark data to compare against
  •  Methods: large-scale usability testing; large-scale surveys
  •  Customized success criteria: Poor: less than 24%; Minimally acceptable: 24% - 45%; Good: 46% - 67%; Excellent: 68% - 100%
  System Usability Scale (SUS)
  •  Definition: a list of 10 standard ease-of-use questions (positive version)
  •  Why chosen: free, short, valid, and reliable; a single SUS score can be calculated and a grade can be assigned; over 500 user studies available for comparison; the most sensitive post-study questionnaire
  •  Methods: large-scale usability testing; large-scale surveys
  •  Customized success criteria: Poor: less than 63; Minimally acceptable: 63-72; Good: 73-78; Excellent: 79-100
  • 16. Clickstream data
  •  Good to have
  •  Another way to visually illustrate the problems surfaced by other metrics such as task success rate and easiness ratings.
  •  The navigation path is very helpful to have.
  •  But hard to implement & analyze in UserZoom
  •  Approach 1: asking participants to install a plugin, which reduces the participation rate.
  •  Approach 2: inserting a line of JavaScript code on every page of the website, which is hard to achieve.
  • 17. Task time
  •  Good for benchmark comparison
  •  between prototypes/releases.
  •  with competitors.
  •  But hard to measure accurately
  •  In large-scale studies, people might be multitasking.
  •  In small-scale studies, people might be asked to think aloud.
  •  You don’t know how hard people tried on the task.
  • 18. Agenda
  •  Why measure user experience
  •  What user experience KPIs or metrics to use
  •  How to communicate user experience metrics
  •  Best practices and future work
  • 19. Example task performance table
  Fake data for illustration purposes

  Task performance data summary (% means percentage of all participants for each task):

  Task     Successful with the task   Considered task easy   Highlights of the problem        Recommendation
  Task 1   90%                        95%                    Easy task                        None
  Task 2   61%                        42%                    Hard to find the action button   xx
  Task 3   55%                        30%                    Too many clicks                  xx
  Task 4   85%                        90%                    Relatively easy task             xx

  Benchmark comparison between two studies on the same tasks:

           Successful with the task         Considered task easy
           Study 1   Study 2   Change       Study 1   Study 2   Change
  Task 1   89%       84%       -5.1%        60%       63%       3.0%
  Task 2   89%       70%       -19.0%       65%       60%       -5.2%
  Task 3   62%       55%       -6.8%        75%       87%       12.0%
  Task 4   71%       90%       19.0%        56%       80%       24.0%

  © Copyright IBM Corporation 2012
  • 20. User experience scorecard examples
  •  High-priority but poor-usability tasks should be the focus area.
  •  Core use cases that have the most failed tasks should be the focus.
  Fake data for illustration purposes
  • 21. Illustrated task flow by task performance data
  Fake data for illustration purposes
  • 22. Portfolio dashboard and health index
  Details on how to calculate the health index
  • 23. User story mapping (in exploration)
  Incorporate tasks and metrics into the agile development process
  Reference: [1] User story mapping presentation by Steve Rogalsky
  http://www.slideshare.net/SteveRogalsky/user-story-mapping-11507966
  • 24. Agenda
  •  Why measure user experience
  •  What user experience KPIs or metrics to use
  •  How to communicate user experience metrics
  •  Best practices and future work
  • 25. Best practices
  •  Secure executive buy-in on the user experience metrics and dashboard.
  •  Focus evaluation on core use cases and top tasks.
  •  Use standardized questions/metrics for peer comparability.
  •  Try random sampling instead of convenience sampling when recruiting participants for large-scale usability tests.
  •  Visualization is the key to effective communication.
  •  KPIs/metrics catch people’s attention, but qualitative information provides the insights.
  • 26. Future work
  •  Align UX metrics with business goals.
  •  User experience vs. customer experience.
  •  Apply metrics to interaction models and scenarios.
  •  Communicate UX metrics to influence product strategy.
  •  Incorporate UX metrics into the agile development process.
  •  Collaborate with the analytics team to gather metrics such as engagement/adoption/retention and cohort analysis.
  •  How do we measure usefulness (vs. ease of use)?

Editor's notes

  1. A user experience researcher in IBM Lotus for 7 years, working on various products including Notes, ST, and SmartCloud, but mostly social computing. In recent years we have focused more on quantitative user experience research: conducting large-scale unmoderated usability studies and large-scale surveys, and building our dashboard, which is a good communication tool. We still do lots of iterative usability testing and user interviews, with more triangulation of quantitative and qualitative research.
  2. Lord Kelvin was a physicist. Qualitative vs. quantitative data in UX.
  3. Art or science? Apple vs. Google…
  4. Importance of measurement here
  5. Thought leaders embrace metrics.
  6. Thought leaders embrace metrics.
  7. Thought leaders embrace metrics.