If you stepped into a usability test in the early days of the discipline, it would have looked very much like a formal experiment in a psychology lab. Participants were asked to carry out tasks and the data were analysed using statistical methods. All that was missing was the white lab coat. Unsurprisingly, these tests were expensive and complicated to run, leading to the development of a breakaway discipline: discount usability.
Discount usability introduced three key techniques that aimed to simplify methods of data collection:
- Thinking-aloud usability tests;
- Low-fidelity prototypes;
- Heuristic evaluation.
These techniques work best as part of an iterative design cycle where usability problems are found and fixed and then the next "throwaway" prototype is again quickly tested with a small number of participants. This approach truly revolutionised the field. It would now be difficult to find a usability practitioner who did not use most of these techniques during an assignment.
In fact, the pendulum has now swung so far that many people are dismissive of traditional, lab-based testing.
Limitations of discount usability
But discount methods fall short in one important area. They cannot answer the question, "How usable is this system?" For example, imagine you are in the business of developing a new mobile phone. Your marketing department have identified a key product attribute: the ease with which the user can enter and send a text message. Your new phone needs to beat the competition. Which method will you choose?
Discount methods will help you spot usability problems during the early design phase, but they will not be able to show how you stack up against competitor products. This type of usability evaluation — testing against usability metrics — requires a more traditional, lab-based approach. For example, when it comes to measuring the ease with which the user can enter and send a text message, we might want to put the new phone and its competitors in a head-to-head test. We could then collect robust measures such as the time it takes to complete the task, the number of errors made (such as typing or menu navigation errors) as well as some subjective ratings from participants.
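As an illustration (not part of the original article), a head-to-head comparison like this can be summarised with a short script. The task times below are invented for the sketch; real data would come from your lab sessions.

```python
from statistics import mean, stdev

# Hypothetical time-on-task data (seconds) for "enter and send a text
# message", one value per participant per phone.
new_phone = [48.2, 52.1, 45.7, 60.3, 49.9, 55.0]
competitor = [63.5, 58.8, 71.2, 66.0, 60.4, 69.1]

# Summarise each phone's performance: mean and spread of task times.
for label, times in [("New phone", new_phone), ("Competitor", competitor)]:
    print(f"{label}: mean {mean(times):.1f}s, sd {stdev(times):.1f}s")
```

With a realistic sample size you would follow the descriptive summary with an appropriate statistical test before claiming that one phone beats the other.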
Usability metrics are precise, quality measures used to evaluate the system. Their purpose is to produce a system that is neither under- nor over-engineered. To understand why usability metrics are useful, remember the joke about the two campers disturbed by a bear. As one of them puts on his trainers, the other says: "What are you doing? You'll never outrun a bear!" The first one replies: "I don't need to outrun the bear. I just need to outrun you." Usability metrics help you outrun the competition.
We recommend collecting usability metrics in the three Es: effectiveness, efficiency and emotion.
- Effectiveness: the accuracy and completeness with which users achieve specified goals;
- Efficiency: the resources expended in relation to the accuracy and completeness with which users achieve their goals;
- Emotion: freedom from discomfort, and positive attitudes towards the use of the system.
Effectiveness measures of usability
Effectiveness refers to the accuracy and completeness with which users can achieve their goals. Typical measures include:
- Number of power tasks performed;
- Percentage of relevant functions used;
- Percentage of tasks completed successfully on first attempt;
- Number of persistent errors;
- Number of errors per unit of time;
- Percentage of users able to complete the task successfully;
- Number of errors made performing specific tasks;
- Number of requests for assistance accomplishing task;
- Objective measure of quality of output;
- Objective measure of quantity of output;
- Percentage of users who can carry out key tasks without reading the manual.
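To make the effectiveness measures concrete, here is a minimal sketch (not from the original article) that computes two of them, first-attempt success rate and errors per participant, from invented per-participant results:

```python
# Hypothetical per-participant results for one task: whether the task was
# completed on the first attempt, and the number of errors made.
results = [
    {"completed_first_try": True,  "errors": 0},
    {"completed_first_try": True,  "errors": 1},
    {"completed_first_try": False, "errors": 3},
    {"completed_first_try": True,  "errors": 0},
    {"completed_first_try": False, "errors": 2},
]

n = len(results)
success_rate = 100 * sum(r["completed_first_try"] for r in results) / n
errors_per_user = sum(r["errors"] for r in results) / n

print(f"{success_rate:.0f}% completed on first attempt")  # prints "60% ..."
print(f"{errors_per_user:.1f} errors per participant")    # prints "1.2 ..."
```

In practice you would compute these per task and per product, so the same script can feed the head-to-head comparison described above.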
Efficiency measures of usability
Efficiency refers to the amount of effort users need to put in to achieve their goals. Typical measures include:
- Time to execute a particular set of instructions;
- Time taken on first attempt;
- Time to perform a particular task;
- Time to perform a particular task after a specified period of time away from the product;
- Time to perform task compared to an expert;
- Time to learn to criterion;
- Time to achieve expert performance;
- Number of key presses taken to achieve task;
- Time spent on correcting errors;
- Number of icons remembered after task completion;
- Time to install a product;
- Per cent of time spent using the manual;
- Time spent relearning functions.
Emotional measures of usability
Emotion refers to how users feel about the system. Typical measures include:
- Ratio of positive to negative adjectives used to describe the product;
- Per cent of customers that rate the product as "more satisfying" than a previous product;
- Rate of voluntary use;
- Per cent of customers who feel "in control" of the product;
- Customer rating on a 7-point scale anchored with "makes me more/less productive";
- Per cent of customers who would recommend it to a friend after two hours’ use;
- Per cent of customers that rate the product as "easier to use" than a key competitor.
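The first emotional measure, the ratio of positive to negative adjectives, can be tallied with a few lines of code. This sketch is illustrative only: the word lists and adjectives below are invented, and a real study would code participants' words by hand or against an agreed list.

```python
# Hypothetical coding scheme: which adjectives count as positive or negative.
positive = {"fast", "simple", "clear", "friendly"}
negative = {"confusing", "slow", "fiddly"}

# Adjectives participants used to describe the product (invented data).
adjectives = ["simple", "fast", "confusing", "clear", "simple", "fiddly", "friendly"]

pos = sum(a in positive for a in adjectives)
neg = sum(a in negative for a in adjectives)
print(f"Positive:negative ratio = {pos}:{neg}")  # prints "... = 5:2"
```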
Push back the pendulum
If you are developing a new product or a new piece of software, be sure to continue to use discount methods to eradicate usability problems. But when you need to answer the question, "How usable is this system?", push back the pendulum and collect usability metrics.
About the author
Dr. David Travis (@userfocus on Twitter) holds a BSc and a PhD in Psychology and is a Chartered Psychologist. He has worked in the fields of human factors, usability and user experience since 1989 and has published two books on usability. David helps both large firms and start-ups connect with their customers and bring business ideas to market.