The state we’re in
We all agree that it’s important to create a “good” user experience, but when you’re in the heat of a design project, how do you know if you’re on track?
Traditional, lab-based usability testing is a good way to find usability issues that need to be fixed — but it's not the best way to monitor the progress of a design over time. A sample size of 5 participants — typical in a lab-based usability test — is quick to run, but it's too small to give you the kind of robust data you need to make critical business decisions.
You can increase reliability by testing larger participant samples, but this slows development down as teams wait for your results. Participants need to be recruited and scheduled, lab space needs to be reserved, participants need to be tested and data needs to be analysed. As well as taking time, all this makes usability testing a real drain on the project manager’s budget.
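To see why a sample of 5 is too small for metrics (as opposed to finding issues), it helps to look at the confidence interval around a task success rate. This sketch uses the adjusted Wald (Agresti-Coull) interval, a common choice for small-sample success rates; the sample figures are illustrative, not from a real study:

```python
import math

def adjusted_wald_ci(successes, n, z=1.96):
    """95% adjusted Wald (Agresti-Coull) interval for a task success rate."""
    n_adj = n + z**2
    p_adj = (successes + z**2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# 4 of 5 participants succeed: the interval is far too wide to act on
print(adjusted_wald_ci(4, 5))     # roughly (0.36, 0.98)

# 80 of 100 participants succeed: a much tighter estimate
print(adjusted_wald_ci(80, 100))  # roughly (0.71, 0.87)
```

With 5 participants, an observed 80% success rate is statistically consistent with anything from a disastrous design to a near-perfect one — which is why small samples are fine for spotting problems but poor for tracking a metric.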
This has led Jakob Nielsen to claim that, “Metrics are expensive and are a poor use of typically scarce usability resources.”
But this causes a problem for people who manage design and development projects.
If project managers can’t measure it, it can’t be monitored. And if it’s not monitored, it gets ignored. This is one reason why many usability tests get delayed until the end of development — but by that time, it’s too late to make any significant changes to enhance the user experience.
Surely there’s a better way.
Indeed there is. We can satisfy the needs of the project manager, the budget holder and the statistician by using remote usability testing tools to run a metrics-based test. With these tools we’ll also be able to run several usability tests throughout the project and we'll find it's cheaper than running a large, lab-based test at the end of design.
But we’re getting ahead of ourselves. First, let’s take a look at some of the benefits of collecting UX metrics. Then we’ll discuss how to create bullet-proof measures you can use on your own projects. Finally, we’ll return to the issue of how you collect the data.
Why UX metrics?
UX metrics provide a way for you to see how your current design measures up to what your customers (and the business) need. In practice, we characterise user experience by measuring user performance on a basket of test tasks.
UX metrics help you:
- Make design decisions. When you are guided by a clear set of UX metrics, decisions about features, functionality and resource allocation can be made more quickly, consistently and rationally. UX metrics prevent feature creep because they help your team resist diversions and keep everyone focused on customer and business priorities.
- Measure progress. UX metrics provide an objective way to track your progress on agile projects and help you decide if your system is ready to move from one sprint to the next. They can also be used in traditional waterfall development methods to judge if a design is ready to move from one life cycle phase to the next.
- Communicate with the project team and senior management. UX metrics create a framework for communicating progress toward the project’s goals.
Creating UX metrics
There are five steps to creating a solid UX metric:
- Identify the red routes
- Create a user story
- Define success and how you’ll measure it
- Assign values to the criteria
- Monitor throughout development
Let’s look at each of these steps with a worked example.
Identify the red routes
UX metrics need to focus on the critical user journeys with the system: we call these red routes.
Most systems, even quite complex ones, usually have only a handful of critical tasks. This doesn’t mean that the system will support only those tasks: it simply means that these tasks are the essential ones for the business and for users. So it makes sense to use these red routes to track progress.
There are many ways to identify the red routes for a system. For example, for a web site you could:
- Examine competitor sites: what tasks are commonly supported on similar sites to yours?
- Identify the top 20 most-visited pages: what are most people doing?
- Analyse top search terms: what are people looking for in your site?
- Speak to support staff: what are the most common requests for help?
- Brainstorm a potential (long) list of tasks and survey users to find the top five.
- Use an intercept survey on your web site (“Why did you visit the site today?”)
For example, let’s say that we’re developing a car satnav system. Our initial, long list of tasks might include:
- Plan an itinerary
- Get live traffic information
- Find an alternative route
- Plan a route
- Connect a phone via Bluetooth
- Advanced lane guidance
- Change the voice
- Navigate home
- Add a favourite location
- Set up voice commands
- Change the map colours
- Speak street names
- Change the display brightness
- Set flight mode
- Set route planning preferences
We’re looking for tasks that are carried out by most or all of the people most or all of the time. This helps us prioritise the list and arrive at a smaller set of tasks: the red routes. For example, these might be:
- Plan a route
- Navigate home
- Find an alternative route
- Add a favourite
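If you take the survey route suggested above — brainstorming a long list of tasks and asking users to pick their most important one — the prioritisation is simple to automate. Here's a minimal sketch with made-up vote data (the responses are illustrative, not real survey results):

```python
from collections import Counter

# Hypothetical survey responses: each participant picked their top task
responses = [
    "Plan a route", "Navigate home", "Plan a route",
    "Find an alternative route", "Add a favourite location",
    "Plan a route", "Navigate home", "Change the voice",
    "Navigate home", "Find an alternative route",
    "Plan a route", "Add a favourite location",
]

# Rank tasks by votes and keep the top five as candidate red routes
red_routes = [task for task, votes in Counter(responses).most_common(5)]
print(red_routes)
```

In practice you'd weight this evidence against business priorities and the other sources (search terms, support requests, analytics) rather than taking the raw vote count at face value.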
Create a user story
Our next step is to create a user story so we can think about the context of the task.
User stories are a key component of agile development, but you don’t need to be using agile to benefit from them. User stories have a particular structure:
“As a user, I want to… so that I can…”
Traditionally, these are written on index cards, so you’ll hear people in agile teams talk about ‘story cards’.
We like Anders Ramsay’s take on user stories where the persona “narrates” the story, because this ensures the scenario is fully grounded. Now, instead of thinking of generic segments (like ‘holidaymaker’ or ‘sales rep’ as users of a satnav system), we’re thinking of the needs and goals of a specific persona. So for example, for the “Find an alternative route” red route we might write:
Justin says: “I want to find an alternative route so that I can avoid motorways”
…where Justin is a 50-year-old holidaymaker who uses his satnav intermittently and has low technology experience.
Define success and how you’ll measure it
In this step, we need to define what success means on this task. This is where our previous hard work, thinking of red routes and user stories, becomes critical.
Without those earlier steps, if you try to define a successful user experience, the chances are that you’ll come up with generic statements like “easy to use”, “enjoyable” or “quick to learn”. It’s not that these aren’t worthy design goals, it’s that they are simply too abstract to help you measure the user experience.
But with our earlier investment in red routes and user stories, we can now decide what success means for those specific tasks. So for example, we can now talk about red route-specific measures, such as task success, time taken and overall satisfaction. For our specific example, we might define success as the percentage of people who successfully manage to calculate a motorway-free route, measured by usability testing.
Note that although we’ve been talking about a single UX metric for simplicity, a typical system is likely to have a handful of UX metrics. These will cover each of the red routes for the system.
Assign values to the criteria
Deciding on the precise criteria to use for the UX metric requires you to have some benchmark to compare it with. This could be a competitor system or it could be the performance of the previous product. Without these data you can’t say if the current design is an improvement over the previous one.
If your system is truly novel, then try thinking about that ultimate business metric: revenue. How much money do you expect to make from this product? What difference in revenue might arise from a success rate of (say) 70% versus 80%? For more on this, read Philip Hodgson's related article, Making usability metrics count.
When setting these values, it’s useful to consider both ‘target’ values and ‘minimum’ values. So for example, if competitor systems are scoring at 70%, we may set our target value at 80% with a minimum acceptable value of 70%.
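The minimum/target criteria lend themselves to a simple status check when you come to report results. A small illustrative sketch, using the example values from this article (70% minimum, 80% target):

```python
def metric_status(observed, minimum, target):
    """Classify an observed task success rate against the agreed criteria."""
    if observed >= target:
        return "On target"
    if observed >= minimum:
        return "Below target"
    return "Below minimum"

# E.g. a 73% success rate against a 70% minimum and an 80% target
print(metric_status(0.73, minimum=0.70, target=0.80))  # prints "Below target"
```

The three-way split matters in reporting: a score below target but above minimum means "keep iterating", whereas a score below minimum is a release blocker.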
Monitor throughout development
The final step is to monitor progress throughout design but we need to achieve this without the expense of large sample, lab-based usability testing.
For tracking metrics, remote, online usability tests are ideal, because they’re cheap to run and quick to set up. For example, at the beginning of the project, a simple web-based usability test, such as “first click” testing on a screenshot, may be sufficient. (In the past, we’ve used Verify for this, but there are many companies offering this kind of service).
You’ll find that a test can be set up in less than an hour, with results available in a day or so. Quicker, in fact, than holding a meeting to discuss the design with the team (with the added benefit that any design changes that result will be based on data rather than opinion). Sample sizes of over 100 participants are easy to achieve and will ensure that the data you collect can be generalised. (Be sure to preface your test with a short screener to make sure participants are representative).
As you move towards a more fleshed-out system, you can monitor progress with other remote testing tools, such as benchmark tests, where participants complete an end-to-end task with a design that's more functionally rich. (Disclosure: we run our own managed service). The important thing is to test early and often, towards the end of each sprint, to measure progress.
For your management report, we’re again after something lightweight with the minimum of documentation, so a simple table or graphic like this works fine.
| Red Route | User story | Measure | Metric | Status as of Sprint 3 |
| --- | --- | --- | --- | --- |
| Find an alternative route | Justin says: “I want to find an alternative route so that I can avoid motorways” | Percentage of people who successfully manage to calculate a motorway-free route, measured by usability testing | Minimum: 70%; Target: 80% | 73% (Below target) |
This is the strength of metrics-based testing: it gives us the what of usability.
But to discover how to get the score up to 80%, we need to understand why participants are struggling. And that leads us to an important final point.
Metrics-based versus lab-based usability tests
Metrics-based usability tests aren’t an alternative to traditional lab-based tests: they just answer different questions. Like an A/B test that can prove one design is better than another but can't say why, it's often hard to find out why a participant has failed on a metrics-based test. You could ask your online participants why, with a post-test survey, but there's little benefit because people are poor at introspecting into their own behaviour.
In my experience, working with a range of companies who use a variety of development methods, the most successful teams combine large sample, unmoderated usability tests to capture the what with small sample usability tests to understand the why.
Until now, metrics-based testing has played a minor role in user-centred design projects when compared with lab-based testing. But the current avalanche of cheap, quick and reliable online tools will surely reverse that trend.
About the author
Dr. David Travis (@userfocus on Twitter) is a User Experience Strategist. He has worked in the fields of human factors, usability and user experience since 1989 and has published two books on usability. David helps both large firms and start ups connect with their customers and bring business ideas to market. If you like his articles, why not join the thousands of other people taking his free online user experience course?