Get hands-on practice in all the key areas of UX and prepare for the BCS Foundation Certificate.
User experience metrics are measures that help you assess how your design stacks up against the needs of your customers and the needs of your business. Lab-based methods of collecting UX metrics are too slow and expensive to be part of most design projects, especially those using agile methodologies. But with online usability testing tools, regular user experience benchmarking is now cheap and quick to carry out.
We all agree that it’s important to create a “good” user experience, but when you’re in the heat of a design project, how do you know if you’re on track?
Traditional, lab-based usability testing is a good way to find usability issues that need to be fixed, but it's not the best way to monitor the progress of a design over time. A sample size of 5 participants — typical in a lab-based usability test — is quick to run but has too few participants to give you the kind of robust data you need to make critical business decisions.
You can increase reliability by testing larger participant samples, but this slows development down as teams wait for your results. Participants need to be recruited and scheduled, lab space needs to be reserved, participants need to be tested and data needs to be analysed. As well as taking time, all this makes usability testing a real drain on the project manager’s budget.
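To see why a sample of 5 gives such shaky numbers, here is a minimal sketch using the standard Wilson score confidence interval for a task success rate (the participant counts are illustrative, not from a real test):

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for an observed task success rate."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

# 4 of 5 participants succeed: an 80% observed rate, but the interval
# is far too wide to support a business decision.
print([round(x, 2) for x in wilson_interval(4, 5)])    # roughly [0.38, 0.96]

# 80 of 100 succeed: the same observed rate, with a much tighter interval.
print([round(x, 2) for x in wilson_interval(80, 100)]) # roughly [0.71, 0.87]
```

With 5 participants, the true success rate could plausibly be anywhere from under 40% to over 95%; with 100, the range narrows to a band you can actually act on.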
This has led Jakob Nielsen to claim that, “Metrics are expensive and are a poor use of typically scarce usability resources.”
But this causes a problem for people who manage design and development projects.
If project managers can’t measure it, it can’t be monitored. And if it’s not monitored, it gets ignored. This is one reason why many usability tests get delayed until the end of development — but by that time, it’s too late to make any significant changes to enhance the user experience.
Surely there’s a better way.
Indeed there is. We can satisfy the needs of the project manager, the budget holder and the statistician by using remote usability testing tools to run a metrics-based test. With these tools we’ll also be able to run several usability tests throughout the project and we'll find it's cheaper than running a large, lab-based test at the end of design.
But we’re getting ahead of ourselves. First, let’s take a look at some of the benefits of collecting UX metrics. Then we’ll discuss how to create bullet-proof measures you can use on your own projects. Finally, we’ll return to the issue of how you collect the data.
UX metrics provide a way for you to see how your current design measures up to what your customers (and the business) need. In practice, we characterise the user experience by measuring user performance on a basket of test tasks.
UX metrics help you:
There are five steps to creating a solid UX metric:
Let’s look at each of these steps with a worked example.
UX metrics need to focus on the critical user journeys with the system: we call these red routes.
Most systems, even quite complex ones, usually have only a handful of critical tasks. This doesn’t mean that the system will support only those tasks: it simply means that these tasks are the essential ones for the business and for users. So it makes sense to use these red routes to track progress.
There are many ways to identify the red routes for a system. For example, for a web site you could:
For example, let’s say that we’re developing a car satnav system. Our initial, long list of tasks might include:
We’re looking for tasks that are carried out by most or all of the people most or all of the time. This helps us prioritise the list and arrive at a smaller set of tasks: the red routes. For example, these might be:
Our next step is to create a user story so we can think about the context of the task.
User stories are a key component of agile development, but you don’t need to be using agile to benefit from them. User stories have a particular structure:
“As a [user], I want to [do something] so that I can [achieve a goal]”
Traditionally, these are written on index cards, so you’ll hear people in agile teams talk about ‘story cards’.
We like Anders Ramsay’s take on user stories where the persona ‘narrates’ the story, because this ensures the scenario is fully grounded. Now, instead of thinking of generic segments (like ‘holidaymaker’ or ‘sales rep’ as users of a satnav system), we’re thinking of the needs and goals of a specific persona. So for example, for the “Find an alternative route” red route we might write:
Justin says: “I want to find an alternative route so that I can avoid motorways”
…where Justin is a 50-year-old holidaymaker who uses his satnav intermittently and has low technology experience.
In this step, we need to define what success means on this task. This is where our previous hard work, thinking of red routes and user stories, becomes critical.
Without those earlier steps, if you try to define a successful user experience, the chances are that you’ll come up with generic statements like “easy to use”, “enjoyable” or “quick to learn”. It’s not that these aren’t worthy design goals, it’s that they are simply too abstract to help you measure the user experience.
But with our earlier investment in red routes and user stories, we can now decide what success means for those specific tasks. So for example, we can now talk about red route-specific measures, such as task success, time taken and overall satisfaction. For our specific example, we might define success as the percentage of people who successfully manage to calculate a motorway-free route, measured by usability testing.
Note that although we’ve been talking about a single UX metric for simplicity, a typical system is likely to have a handful of UX metrics. These will cover each of the red routes for the system.
Deciding on the precise criteria to use for the UX metric requires you to have some benchmark to compare it with. This could be a competitor system or it could be the performance of the previous product. Without these data you can’t say if the current design is an improvement over the previous one.
If your system is truly novel, then try thinking about that ultimate business metric: revenue. How much money do you expect to make from this product? What difference in revenue might arise from a success rate of (say) 70% versus 80%? For more on this, read Philip Hodgson's related article, Making usability metrics count.
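To make that revenue comparison concrete, a back-of-the-envelope sketch (every figure below is a made-up assumption for illustration, not data from the article):

```python
# Illustrative assumptions only: attempts and revenue figures are invented.
monthly_attempts = 50_000      # people attempting the red route each month
revenue_per_success = 12.00    # average revenue per successful task, in GBP

for success_rate in (0.70, 0.80):
    revenue = monthly_attempts * success_rate * revenue_per_success
    print(f"{success_rate:.0%} success rate -> £{revenue:,.0f} per month")
```

Under these assumptions, the 10-point gap between a 70% and an 80% success rate is worth £60,000 a month, which puts a concrete price on improving the metric.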
When setting these values, it’s useful to consider both ‘target’ values and ‘minimum’ values. So for example, if competitor systems are scoring at 70%, we may set our target value at 80% with a minimum acceptable value of 70%.
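Once target and minimum values are set, classifying a test result against them is mechanical. A minimal sketch (the function name and thresholds are illustrative; the example figures match the satnav metric above):

```python
def metric_status(successes, n, minimum=0.70, target=0.80):
    """Classify an observed task success rate against minimum and target values."""
    rate = successes / n
    if rate >= target:
        return rate, "On target"
    if rate >= minimum:
        return rate, "Below target"
    return rate, "Below minimum"

# 73 of 100 online test participants completed the motorway-free route task.
print(metric_status(73, 100))  # (0.73, 'Below target')
```

The three-way split matters in practice: ‘below target’ means keep iterating, while ‘below minimum’ signals the red route is not yet fit to ship.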
The final step is to monitor progress throughout design but we need to achieve this without the expense of large sample, lab-based usability testing.
For tracking metrics, remote, on-line usability tests are ideal, because they’re cheap to run and quick to set up. For example, at the beginning of the project, a simple web-based usability test, such as “first click” testing on a screen shot, may be sufficient. (In the past, we’ve used Verify for this, but there are many companies offering this kind of service).
You’ll find that a test can be set up in less than an hour, with results available in a day or so. Quicker, in fact, than holding a meeting to discuss the design with the team (with the added benefit that any design changes that result will be based on data rather than opinion). Sample sizes of over 100 participants are easy to achieve and will ensure that the data you collect can be generalised. (Be sure to preface your test with a short screener to make sure participants are representative).
As you move towards a more fleshed-out system, you can monitor progress with other remote testing tools, such as benchmark tests, where participants complete an end-to-end task with a design that's more functionally rich. The important thing is to test early and often, towards the end of each sprint, to measure progress.
For your management report, we’re again after something lightweight with the minimum of documentation, so a simple table or graphic like this works fine.
| Red route | User story | Measure | Metric | Status as of Sprint 3 |
| --- | --- | --- | --- | --- |
| Plan a route | Justin says: “I want to find an alternative route so that I can avoid motorways” | Percentage of people who successfully manage to calculate a motorway-free route, measured by usability testing | Minimum: 70%; Target: 80% | 73% (below target) |
This is the strength of metrics-based testing: it gives us the what of usability.
But to discover how to get the score up to 80%, we need to understand why participants are struggling. And that leads us to an important final point.
Metrics-based usability tests aren’t an alternative to traditional lab-based tests: they just answer different questions. Like an A/B test that can prove one design is better than another but can't say why, it's often hard to find out why a participant has failed on a metrics-based test. You could ask your online participants why, with a post-test survey, but there's little benefit because people are poor at introspecting into their own behaviour.
In my experience, working with a range of companies who use a variety of development methods, the most successful teams combine large sample, unmoderated usability tests to capture the what with small sample usability tests to understand the why.
Until now, metrics-based testing has played a minor role in user-centred design projects when compared with lab-based testing. But the current avalanche of cheap, quick and reliable online tools will surely reverse that trend.
Dr. David Travis (@userfocus) has been carrying out ethnographic field research and running product usability tests since 1989. He has published three books on user experience including Think Like a UX Researcher. If you like his articles, you might enjoy his free online user experience course.
copyright © Userfocus 2021.