The state we’re in

We all agree that it’s important to create a “good” user experience, but when you’re in the heat of a design project, how do you know if you’re on track?

Traditional, lab-based usability testing is a good way to find usability issues that need to be fixed, but it’s not the best way to monitor the progress of a design over time. A sample size of 5 participants, typical of a lab-based usability test, is quick to run but too small to give you the kind of robust data you need to make critical business decisions.
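
To get a feel for just how noisy a 5-participant success rate is, here’s a rough sketch, purely to illustrate the statistics. It uses the adjusted-Wald interval (a common choice for small-sample task success rates) to compare the plausible range around an 80% observed success rate measured with 5 participants and with 100:

    from math import sqrt

    def adjusted_wald(successes, n, z=1.96):
        """95% adjusted-Wald (Agresti-Coull) interval for a task success rate."""
        p_adj = (successes + z * z / 2) / (n + z * z)
        margin = z * sqrt(p_adj * (1 - p_adj) / (n + z * z))
        return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

    # 4 of 5 participants succeed: 80% observed, but the plausible range runs
    # from roughly 36% to 98%, far too wide to base a business decision on.
    low, high = adjusted_wald(4, 5)
    print(f"n=5:   80% observed, plausible range {low:.0%} to {high:.0%}")

    # 80 of 100 participants succeed: same 80% observed, now roughly 71% to 87%.
    low, high = adjusted_wald(80, 100)
    print(f"n=100: 80% observed, plausible range {low:.0%} to {high:.0%}")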

You can increase reliability by testing larger participant samples, but this slows development down as teams wait for your results. Participants need to be recruited, scheduled and tested, lab space needs to be reserved, and the data needs to be analysed. As well as taking time, all of this makes usability testing a real drain on the project manager’s budget.

This has led Jakob Nielsen to claim that, “Metrics are expensive and are a poor use of typically scarce usability resources.”

But this causes a problem for people who manage design and development projects.

If project managers can’t measure the user experience, they can’t monitor it. And if it isn’t monitored, it gets ignored. This is one reason why many usability tests get delayed until the end of development, by which time it’s too late to make any significant changes to enhance the user experience.

Surely there’s a better way.

Indeed there is. We can satisfy the needs of the project manager, the budget holder and the statistician by using remote usability testing tools to run a metrics-based test. With these tools we’ll also be able to run several usability tests throughout the project and we'll find it's cheaper than running a large, lab-based test at the end of design.

But we’re getting ahead of ourselves. First, let’s take a look at some of the benefits of collecting UX metrics. Then we’ll discuss how to create bullet-proof measures you can use on your own projects. Finally, we’ll return to the issue of how you collect the data.

Why UX metrics?

UX metrics provide a way for you to see how your current design measures up to what your customers (and the business) need. In practice, we characterise the user experience by measuring user performance on a basket of test tasks.

UX metrics help you:

  • Make design decisions. When you are guided by a clear set of UX metrics, decisions about features, functionality and resource allocation can be made more quickly, consistently and rationally. UX metrics prevent feature creep because they help your team resist diversions and keep everyone focused on customer and business priorities.
  • Measure progress. UX metrics provide an objective way to track your progress on agile projects and help you decide if your system is ready to move from one sprint to the next. They can also be used in traditional waterfall development methods to judge if a design is ready to move from one life cycle phase to the next.
  • Communicate with the project team and senior management. UX metrics create a framework for communicating progress toward the project’s goals.

Creating UX metrics

There are five steps to creating a solid UX metric:

  1. Identify the red routes
  2. Create a user story
  3. Define success and how you’ll measure it
  4. Assign values to the criteria
  5. Monitor throughout development

Let’s look at each of these steps with a worked example.

Identify the red routes

UX metrics need to focus on the critical user journeys through the system: we call these red routes.

Most systems, even quite complex ones, usually have only a handful of critical tasks. This doesn’t mean that the system will support only those tasks: it simply means that these tasks are the essential ones for the business and for users. So it makes sense to use these red routes to track progress.

There are many ways to identify the red routes for a system. For example, for a web site you could:

  • Examine competitor sites: what tasks are commonly supported on similar sites to yours?
  • Identify the top 20 most-visited pages: what are most people doing?
  • Analyse top search terms: what are people looking for in your site?
  • Speak to support staff: what are the most common requests for help?
  • Brainstorm a potential (long) list of tasks and survey users to find the top five (see the code sketch after this list).
  • Use an intercept survey on your web site (“Why did you visit the site today?”)
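
If you go down the survey route, the counting itself is trivial. Here’s a minimal sketch with made-up responses, assuming each respondent simply ticks the tasks they do most often:

    from collections import Counter

    # Made-up survey responses: each respondent picked the tasks they do most often.
    responses = [
        ["Plan a route", "Navigate home"],
        ["Plan a route", "Find an alternative route", "Change the voice"],
        ["Navigate home", "Add a favourite location"],
        ["Plan a route", "Find an alternative route"],
        ["Plan a route", "Navigate home", "Add a favourite location"],
    ]

    votes = Counter(task for response in responses for task in response)

    # The most commonly chosen tasks are the candidates for red routes.
    for task, count in votes.most_common(5):
        print(f"{count} x {task}")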

For example, let’s say that we’re developing a car satnav system. Our initial, long list of tasks might include:

  • Plan an itinerary
  • Get live traffic information
  • Find an alternative route
  • Plan a route
  • Connect a phone via Bluetooth
  • Advanced lane guidance
  • Change the voice
  • Navigate home
  • Add a favourite location
  • Set up voice commands
  • Change the map colours
  • Speak street names
  • Change the display brightness
  • Set flight mode
  • Set route planning preferences

We’re looking for tasks that are carried out by most or all of the people most or all of the time. This helps us prioritise the list and arrive at a smaller set of tasks: the red routes. For example, these might be:

  • Plan a route
  • Navigate home
  • Find an alternative route
  • Add a favourite

Create a user story

Our next step is to create a user story so we can think about the context of the task.

User stories are a key component of agile development, but you don’t need to be using agile to benefit from them. User stories have a particular structure:

“As a user, I want to [do something] so that I can [achieve some goal]”

Traditionally, these are written on index cards, so you’ll hear people in agile teams talk about ‘story cards’.

We like Anders Ramsay’s take on user stories, where the persona ‘narrates’ the story, because this ensures the scenario is fully grounded. Now, instead of thinking of generic segments (like ‘holidaymaker’ or ‘sales rep’ as users of a satnav system), we’re thinking of the needs and goals of a specific persona. So, for example, for the “Find an alternative route” red route we might write:

Justin says: “I want to find an alternative route so that I can avoid motorways”

…where Justin is a 50-year-old holidaymaker who uses his satnav intermittently and has low technology experience.
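
If you like to keep these artefacts alongside your metrics in a script or spreadsheet, a user story is easy to capture as structured data. A minimal sketch (the field names are just our own shorthand, not part of any agile standard):

    from dataclasses import dataclass

    @dataclass
    class UserStory:
        persona: str    # who is narrating the story
        goal: str       # what they want to do
        benefit: str    # why they want to do it
        red_route: str  # the critical task the story belongs to

        def narrate(self):
            return f'{self.persona} says: "I want to {self.goal} so that I can {self.benefit}"'

    story = UserStory(
        persona="Justin",
        goal="find an alternative route",
        benefit="avoid motorways",
        red_route="Find an alternative route",
    )
    print(story.narrate())
    # Justin says: "I want to find an alternative route so that I can avoid motorways"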

Define success and how you’ll measure it

In this step, we need to define what success means on this task. This is where our previous hard work, thinking of red routes and user stories, becomes critical.

Without those earlier steps, if you try to define a successful user experience, the chances are that you’ll come up with generic statements like “easy to use”, “enjoyable” or “quick to learn”. It’s not that these aren’t worthy design goals, it’s that they are simply too abstract to help you measure the user experience.

But with our earlier investment in red routes and user stories, we can now decide what success means for those specific tasks. So for example, we can now talk about red route-specific measures, such as task success, time taken and overall satisfaction. For our specific example, we might define success as the percentage of people who successfully manage to calculate a motorway-free route, measured by usability testing.

Note that although we’ve been talking about a single UX metric for simplicity, a typical system is likely to have a handful of UX metrics. These will cover each of the red routes for the system.

Assign values to the criteria

Deciding on the precise criteria to use for the UX metric requires you to have some benchmark to compare it with. This could be a competitor system or it could be the performance of the previous product. Without these data you can’t say if the current design is an improvement over the previous one.

If your system is truly novel, then try thinking about that ultimate business metric: revenue. How much money do you expect to make from this product? What difference in revenue might arise from a success rate of (say) 70% versus 80%? For more on this, read Philip Hodgson's related article, Making usability metrics count.

When setting these values, it’s useful to consider both ‘target’ values and ‘minimum’ values. So for example, if competitor systems are scoring at 70%, we may set our target value at 80% with a minimum acceptable value of 70%.
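
Pulling these last two steps together: the measure is simply the proportion of test participants who complete the task, and the criteria are the minimum and target values. Here’s a minimal sketch with made-up raw results; only the 70% and 80% thresholds come from the example above:

    # Raw results from a usability test (made up for illustration): did each
    # participant manage to calculate a motorway-free route?
    results = [True, True, False, True, True, False, True, True, True, False]

    success_rate = sum(results) / len(results)  # the measure: task success rate

    MINIMUM = 0.70  # the competitor benchmark from the example above
    TARGET = 0.80   # where we want the design to be

    if success_rate >= TARGET:
        status = "On target"
    elif success_rate >= MINIMUM:
        status = "Below target"
    else:
        status = "Below minimum"

    print(f"Task success: {success_rate:.0%} ({status})")  # Task success: 70% (Below target)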

Monitor throughout development

The final step is to monitor progress throughout design, but we need to achieve this without the expense of large sample, lab-based usability testing.

For tracking metrics, remote, on-line usability tests are ideal, because they’re cheap to run and quick to set up. For example, at the beginning of the project, a simple web-based usability test, such as “first click” testing on a screen shot, may be sufficient. (In the past, we’ve used Verify for this, but there are many companies offering this kind of service).

You’ll find that a test can be set up in less than an hour, with results available in a day or so. Quicker, in fact, than holding a meeting to discuss the design with the team (with the added benefit that any design changes that result will be based on data rather than opinion). Sample sizes of over 100 participants are easy to achieve and will ensure that the data you collect can be generalised. (Be sure to preface your test with a short screener to make sure participants are representative).

As you move towards a more fleshed-out system, you can monitor progress with other remote testing tools, such as benchmark tests, where participants complete an end-to-end task with a design that's more functionally rich. The important thing is to test early and often, towards the end of each sprint, to measure progress.

For your management report, we’re again after something lightweight with the minimum of documentation, so a simple table or graphic like this works fine.

Example of a user experience metric
  Red route: Find an alternative route
  User story: Justin says: “I want to find an alternative route so that I can avoid motorways”
  Measure: Percentage of people who successfully manage to calculate a motorway-free route, measured by usability testing
  Metric: Minimum 70%; Target 80%
  Status as of Sprint 3: 73% (Below target)
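
If you keep these scores in a script rather than a slide deck, generating the status column is straightforward. Here’s a minimal sketch, with made-up figures for Sprints 1 and 2 and the 73% Sprint 3 figure from the table above:

    MINIMUM, TARGET = 0.70, 0.80

    # Task success scores per sprint for the "Find an alternative route" red route.
    # Sprints 1 and 2 are made up; Sprint 3 is the 73% from the table above.
    scores_by_sprint = {1: 0.55, 2: 0.64, 3: 0.73}

    def status(score):
        if score >= TARGET:
            return "On target"
        if score >= MINIMUM:
            return "Below target"
        return "Below minimum"

    print(f"{'Sprint':<8}{'Success':<10}Status")
    for sprint, score in sorted(scores_by_sprint.items()):
        print(f"{sprint:<8}{score:<10.0%}{status(score)}")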

This is the strength of metrics-based testing: it gives us the what of usability.

But to discover how to get the score up to 80%, we need to understand why participants are struggling. And that leads us to an important final point.

Metrics-based versus lab-based usability tests

Metrics-based usability tests aren’t an alternative to traditional lab-based tests: they just answer different questions. Like an A/B test, which can show that one design is better than another but can’t say why, a metrics-based test makes it hard to find out why a participant has failed. You could ask your online participants why with a post-test survey, but there’s little benefit because people are poor at introspecting into their own behaviour.

In my experience, working with a range of companies who use a variety of development methods, the most successful teams combine large sample, unmoderated usability tests to capture the what with small sample usability tests to understand the why.

Until now, metrics-based testing has played a minor role in user-centred design projects compared with lab-based testing. But the current avalanche of cheap, quick and reliable online tools will surely reverse that trend.

About the author

David Travis

Dr. David Travis (@userfocus) has been carrying out ethnographic field research and running product usability tests since 1989. He has published three books on user experience including Think Like a UX Researcher. If you like his articles, you might enjoy his free online user experience course.


