2. Usability evaluation
• Evaluation assesses the effectiveness of the interface designs that we have developed
• It cannot be based arbitrarily on a dislike of the interaction design; it must be justified in some way
• It must let designers know how their designs can be improved
3. Evaluation, not software testing
• Imagine you are dumped on a page somewhere deep in the bowels of a website; you should be able to answer:
• What site is this? (Site ID)
• What page am I on? (Page name)
• What are the major sections of this site? (Sections)
• What are my options at this level? (Local navigation)
• Where am I in the scheme of things? (‘You are here’ indicators)
• How can I search? (Search)
4. Goals of evaluation
• To assess the functionality of the system: determining whether users can achieve the tasks they wish to perform, which lies primarily in determining whether the system provides the appropriate functionality
• Example
• Will the operator be able to handle emergency calls faster than before?
5. Goals of evaluation continued
• Interface effect
• To assess the effect that the interface has on its user(s): this includes factors such as how easy the system is to learn and how well its implementation matches users’ expectations
• Example
• Is the ticket machine simple enough for people to use it successfully the first time?
6. Goals of evaluation continued
• To identify specific problems with the system: this relates to negative features of the design, including contextual features, that could cause an otherwise successful system to fail
• Example
• Will the size of the screen target lead to selection errors?
7. Evaluation in software life cycle
• Early in the life cycle, evaluation is used to predict future usability issues
• Later in the life cycle, it may be used to identify difficulties with the product and to improve it
8. Formative evaluation
• Iterative evaluation
• Usability evaluation is best carried out iteratively throughout the design cycle, both to check the decisions that have been made and to inform future decisions
9. Summative evaluation
• Due to constraints on the design process (time, effort, money), such formative evaluation is not always possible, and the product may only be evaluated at the end of the design process
• Less favored, but commonly practiced
10. DECIDE
• Determining the overall goals to address.
• Exploring the specific questions to be answered.
• Choosing the evaluation paradigm and techniques to answer the questions.
• Identifying the practical issues to be addressed (e.g. selecting participants or
accessing materials).
• Deciding how to deal with the ethical issues.
• Evaluating, interpreting and then presenting the data.
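To make the framework concrete, a planner might capture the six DECIDE items as a simple record. A minimal Python sketch, purely illustrative: EvaluationPlan and its field names are invented here, not part of any standard tool.

```python
# Minimal sketch of a DECIDE-style evaluation plan (all names invented).
from dataclasses import dataclass, field

@dataclass
class EvaluationPlan:
    goals: list[str]             # Determine the overall goals
    questions: list[str]         # Explore the specific questions
    techniques: list[str]        # Choose the paradigm and techniques
    practical_issues: list[str]  # Identify participants, materials, ...
    ethical_issues: list[str]    # Decide how consent/privacy are handled
    findings: dict[str, str] = field(default_factory=dict)  # Evaluate and present

plan = EvaluationPlan(
    goals=["Assess first-use success of the ticket machine"],
    questions=["Can a first-time user buy a ticket in under two minutes?"],
    techniques=["Heuristic evaluation", "Observation with think-aloud"],
    practical_issues=["Recruit five participants", "Book the kiosk prototype"],
    ethical_issues=["Informed consent", "Anonymize recordings"],
)
```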
12. Analytical Testing
• Analytical testing is based on abstract principles and may draw on existing data; it includes:
• Heuristic evaluation
• Consistency inspection
• Cognitive walkthrough
• Formal usability inspection
13. Heuristic evaluation
• ‘Heuristic’, meaning ‘a way of directing your attention fruitfully’
• The main goal of heuristic evaluations is to identify any problems associated
with the design of user interfaces
• Heuristic evaluation can be carried out by a single expert, reducing the complexity and time required for evaluation
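In practice, the expert typically logs each problem against the heuristic it violates, often with a severity rating (Nielsen's commonly cited 0-4 scale). A minimal sketch; the Finding class and the example findings are invented for illustration.

```python
# Sketch of a findings log for a heuristic evaluation (names invented).
# Severity follows Nielsen's commonly cited 0-4 scale:
# 0 = no problem, 1 = cosmetic, 2 = minor, 3 = major, 4 = catastrophe.
from dataclasses import dataclass

@dataclass
class Finding:
    heuristic: str     # which of the 10 rules is violated
    location: str      # where in the interface
    description: str   # what the evaluator observed
    severity: int      # 0-4

findings = [
    Finding("Visibility of system status", "checkout page",
            "No progress indicator while payment is processed", severity=3),
    Finding("Error prevention", "delete dialog",
            "No confirmation before deleting an account", severity=4),
]

# Sort the most severe problems to the top of the report.
for f in sorted(findings, key=lambda f: f.severity, reverse=True):
    print(f"[{f.severity}] {f.heuristic} @ {f.location}: {f.description}")
```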
14. 10 quick-to-use design ‘rules’
• Visibility of system status:
• The system should always keep users informed about what is going on, through appropriate feedback
within reasonable time.
• Match between system and the real world:
• The system should speak the user's language, with words, phrases and concepts familiar to the user,
rather than system-oriented terms. Follow real-world conventions, making information appear in a
natural and logical order.
• User control and freedom:
• Users often choose system functions by mistake and will need a clearly marked "emergency exit" to
leave the unwanted state without having to go through an extended dialogue. Support undo and redo (a minimal undo/redo sketch follows this list).
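The undo/redo sketch referenced above: a common two-stack implementation, not tied to any particular toolkit; UndoHistory and its string-valued state are invented for illustration.

```python
# Minimal undo/redo sketch using two stacks. State here is just a string.
class UndoHistory:
    def __init__(self, initial: str):
        self.state = initial
        self._undo: list[str] = []
        self._redo: list[str] = []

    def apply(self, new_state: str) -> None:
        """Perform an edit: save the old state and clear the redo stack."""
        self._undo.append(self.state)
        self._redo.clear()
        self.state = new_state

    def undo(self) -> None:
        if self._undo:                    # the clearly marked 'emergency exit'
            self._redo.append(self.state)
            self.state = self._undo.pop()

    def redo(self) -> None:
        if self._redo:
            self._undo.append(self.state)
            self.state = self._redo.pop()

h = UndoHistory("hello")
h.apply("hello world")
h.undo()   # back to "hello"
h.redo()   # forward to "hello world" again
```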
15. 10 quick-to-use design ‘rules’
• Consistency and standards:
• Users should not have to wonder whether different words, situations, or actions mean the same thing.
Follow platform conventions.
• Error prevention:
• Even better than good error messages is a careful design which prevents a problem from occurring in
the first place. Either eliminate error-prone conditions or check for them and present users with a
confirmation option before they commit to the action (a confirmation sketch follows this list).
• Recognition rather than recall:
• Minimize the user's memory load by making objects, actions, and options visible. The user should not
have to remember information from one part of the dialogue to another. Instructions for use of the
system should be visible or easily retrievable whenever appropriate.
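The confirmation sketch referenced under 'Error prevention': checking for an error-prone condition and requiring explicit confirmation before committing. The function and its messages are invented for illustration.

```python
# Sketch of the 'error prevention' rule: ask for confirmation before
# committing a destructive action (names and messages invented).
def delete_account(username: str, confirm: bool) -> str:
    if not confirm:
        # State the consequences and require explicit confirmation
        # instead of acting immediately on a possibly mistaken click.
        return (f"About to permanently delete '{username}'. "
                "Call again with confirm=True to proceed.")
    return f"Account '{username}' deleted."

print(delete_account("alice", confirm=False))  # warns first
print(delete_account("alice", confirm=True))   # commits only when confirmed
```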
16. 10 quick-to-use design ‘rules’
• Flexibility and efficiency of use:
• Accelerators—unseen by the novice user—may often speed up the interaction for the expert user such that the system can cater to both
inexperienced and experienced users. Allow users to tailor frequent actions.
• Aesthetic and minimalist design:
• Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes
with the relevant units of information and diminishes their relative visibility.
• Help users recognize, diagnose, and recover from errors:
• Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution (a short example follows this list).
• Help and documentation:
• Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any
such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.
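The error-message example referenced under 'Help users recognize, diagnose, and recover from errors': contrasting a coded message with one that states the problem, the cause, and a constructive fix. All messages are invented.

```python
# A coded message gives the user nothing to act on.
BAD = "Error 0x80070057"

def friendly_error(problem: str, cause: str, fix: str) -> str:
    """Compose an error message: plain language, precise problem, suggested fix."""
    return f"{problem} {cause} {fix}"

GOOD = friendly_error(
    problem="Your file could not be saved.",
    cause="The disk is full.",
    fix="Free up space or choose another location, then try again.",
)
print(GOOD)
```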
17. Consistency inspection
• Consistency inspection examines color, layout, inputs and outputs. It can also be applied to the associated training materials and help systems, and is useful in determining consistency across the application
18. Cognitive walkthrough
• The analysis focuses on the user’s goals and knowledge, and on whether the design leads the user to generate the correct goals
• It involves the analyst (typically an expert in cognitive psychology) ‘stepping through’ user actions on the interface and simulating users doing the task (a recording sketch follows)
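The recording sketch referenced above: one way walkthrough findings might be noted step by step. The four questions follow one widely used formulation (Wharton et al.); the task, answers, and notes are invented for illustration.

```python
# Sketch of recording a cognitive walkthrough, step by step.
QUESTIONS = [
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the action with the effect they want?",
    "If the action is performed, will the user see progress being made?",
]

walkthrough = [
    {"action": "Touch 'Buy ticket' on the start screen",
     "answers": [True, True, True, True], "notes": ""},
    {"action": "Select destination from an alphabetical list",
     "answers": [True, False, True, True],
     "notes": "Scroll affordance is invisible; users may not find the list."},
]

# Any 'False' answer flags a likely failure point in the task.
for step in walkthrough:
    for q, ok in zip(QUESTIONS, step["answers"]):
        if not ok:
            print(f"Problem at '{step['action']}': {q} -> NO. {step['notes']}")
```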
19. Formal usability inspection
• An adversarial, courtroom-style meeting with a moderator, in which the evaluators present and discuss the weaknesses and strengths of the design in a formal setting, allowing them to document problems and discuss agreed-upon solutions
21. Observation and monitoring
• Watching users at work
• Collecting verbal protocols
• Software logging (a minimal logging sketch follows this list)
• Collecting users’ opinions (in questionnaires and interviews)
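The logging sketch referenced above: timestamped interaction events appended to a JSON-lines file for later analysis. The log_event helper, the file name, and the event names are all invented for illustration.

```python
# Minimal sketch of software logging for usability evaluation.
import json
import time

LOG_PATH = "interaction_log.jsonl"

def log_event(event: str, **details) -> None:
    """Append one timestamped interaction event as a JSON line."""
    record = {"t": time.time(), "event": event, **details}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

# Instrumented UI code would call this at interaction points:
log_event("page_view", page="checkout")
log_event("click", target="buy_button")
log_event("error", message="card declined")
```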
22. AEIOU
• A – activities: what are the users doing?
• E – environment: where is it taking place?
• I – interactions: how are the users interacting with one another and with the system?
• O – objects: what are they using?
• U – users: who are they?
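An observer might structure field notes directly around the mnemonic. A purely illustrative sketch; AEIOUNote is not a standard format, and the example note is invented.

```python
# Sketch of an AEIOU field-note record (schema mirrors the mnemonic).
from dataclasses import dataclass

@dataclass
class AEIOUNote:
    activities: str    # what are the users doing?
    environment: str   # where is it taking place?
    interactions: str  # how are users interacting with each other / the system?
    objects: str       # what are they using?
    users: str         # who are they?

note = AEIOUNote(
    activities="Buying a tram ticket",
    environment="Busy station concourse, bright daylight on the screen",
    interactions="Traveler asks a companion which button to press",
    objects="Touchscreen kiosk, contactless card",
    users="Occasional traveler, first-time kiosk user",
)
```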
23. Ethnography
• Collect and analyze data about how people act in natural settings
• A range of methods may be applied as the analyst sees fit, which may involve observation, in-depth interviews, participation in the activity, just ‘hanging about’, and watching and learning about the people using the system. It is what we call a ‘holistic’ method and does not exclude aspects of system use from the evaluation
• Its aim is not to see the use of the system from the perspective of one person, but to understand the situation as it affects the whole set of users, from their own perspectives
24. Experimentation
• Begin by stating a testable hypothesis
• Apply a variety of methods, such as video, audio and data logging, to perform a quantitative analysis of the data
• Examine criteria such as speed of interaction or the number of errors made (an analysis sketch follows this list)
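The analysis sketch referenced above: comparing task completion times between two designs with an independent-samples t-test from SciPy. The timing data is invented for illustration.

```python
# Sketch: do users complete the task faster with design B than design A?
from scipy import stats

design_a = [41.2, 38.5, 45.0, 39.9, 44.1, 42.3]   # completion times (seconds)
design_b = [33.0, 35.8, 31.4, 36.2, 34.5, 32.9]

t, p = stats.ttest_ind(design_a, design_b)
print(f"t = {t:.2f}, p = {p:.4f}")
# A small p suggests the difference in mean completion time is unlikely
# to be due to chance alone.
```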
25. Criticism on Experimentation
• Testing in a laboratory is very unlike the problems and settings that users will face in the real world
Editor's notes
Approach for selecting a usability evaluation method