Numbers for numbers' sake?

When I was about 10 I used to sit by the side of the road with a small notebook and pencil and write down the license plate numbers of passing cars. I can still remember some of them: SVY673 and XDN210 spring readily to mind, and MDT20 was a particular favourite. Usually as I was writing down one number, ten more would whizz by unnoticed, but I didn't care. It was a good way of passing time and it felt important. It was also perfectly pointless.

I didn't realise it at the time, but the same approach to data collection exists in the adult world, too. Soon after a company discovers the value of usability, it wants to start measuring it. Over time, UX teams gather a sizeable body of metrics and begin to feel important — but these measurements are rarely, if ever, used for anything related to the business and they seldom seem to surface in design meetings or drive development decisions.

Has the exercise of writing down usability numbers become perfectly pointless too?

Let's step back for a moment and look at why most organisations measure stuff.

C-level managers use numbers because they have predictive value. For example, I may notice that when the number of people entering my store increases (at Christmas, say), so does the number of sales I make. So if I can increase the number of shoppers at other times of the year, for example by advertising, I would expect that to lead to an increase in sales and profit. "Footfall" becomes a leading indicator: a metric I can use to predict profit.

Most companies have no shortage of metrics relating to the success and failure of their products. They have sales figures, service incident rates, customer loyalty indicators, product return rates, customer support call-centre volumes, ‘customer instruct' rates and so on.

These are useful metrics, but they all suffer from the same problem: they are so-called ‘lagging' indicators. You can obtain these metrics only after you have launched the product — and sometimes long after the launch date. This is fine if the figures are strong, but if your product is failing, this is too late to find out about it. All you can focus on then is costly damage limitation via retroactive fixes, additional call-centre agents, and product replacements or, in some cases, a product recall.

What businesses need is a ‘leading' indicator: a metric that can predict product success or failure before the product has been released. That's where usability measurements come in.

Step 1: Measure usability

The international usability standard, ISO 9241, contains a definition of usability that we can operationalise and measure. I've written about this elsewhere, but briefly: we need to measure the effectiveness and efficiency of a system (both of which can be measured objectively) and include a measure of user satisfaction (a subjective measure).

You need to measure these three components of usability for each red route, and then combine the results across tasks to give an overall measure of effectiveness, efficiency and satisfaction. Finally, you can aggregate the three measures into a single metric for usability.

When arriving at your single usability metric, you may want to weight one of your usability measures more highly than the others. For example, for a museum kiosk, effectiveness might be the most important of the three measures. For an intranet, efficiency might be the most important measure. And for a game on an iPhone you might want to make satisfaction the most important measure. The point is that although you need to collect all three measures to get a fully rounded picture of usability, it's OK to prioritise one of the measures over the others in coming up with your single usability metric.
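
To make this concrete, here is a minimal sketch of one way the three measures might be combined, assuming each has already been normalised to a 0–100 scale. The task names, scores and weights are invented for illustration; ISO 9241 does not prescribe a particular formula for a single metric.

```python
# A sketch of combining effectiveness, efficiency and satisfaction into one
# usability metric. Field names, weights and the 0-100 normalisation are
# assumptions for illustration only, not a formula prescribed by ISO 9241.
from dataclasses import dataclass

@dataclass
class RedRouteScore:
    task: str
    effectiveness: float   # e.g. task completion rate, 0-100
    efficiency: float      # e.g. time on task relative to an expert benchmark, 0-100
    satisfaction: float    # e.g. a rescaled questionnaire score, 0-100

def single_usability_metric(scores, weights=(1/3, 1/3, 1/3)):
    """Average each component across red routes, then take a weighted mean."""
    n = len(scores)
    effectiveness = sum(s.effectiveness for s in scores) / n
    efficiency = sum(s.efficiency for s in scores) / n
    satisfaction = sum(s.satisfaction for s in scores) / n
    w_effectiveness, w_efficiency, w_satisfaction = weights
    total = w_effectiveness + w_efficiency + w_satisfaction
    return (effectiveness * w_effectiveness
            + efficiency * w_efficiency
            + satisfaction * w_satisfaction) / total

# Example: an intranet, where efficiency carries the most weight.
scores = [
    RedRouteScore("Book a meeting room", effectiveness=85, efficiency=60, satisfaction=70),
    RedRouteScore("Submit an expense claim", effectiveness=75, efficiency=55, satisfaction=65),
]
print(round(single_usability_metric(scores, weights=(0.25, 0.5, 0.25)), 1))  # 65.6
```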

Step 2: Correlate your UX metric with business metrics

In this step, you need to work out the predictive value of your metric. There's a slow way of doing this and a fast way.

The slow way is to gradually build up a database of usability metrics for your products, wait for them to be released, and then examine the business metrics associated with each product, such as return rate and calls to customer support. Depending on your product, this might take months at best and could even take years.

The quicker way is to run usability tests on products of yours that are already on the market. I'd suggest picking three products: one with (say) a high volume of calls to customer support, one with a ‘typical' volume of calls, and one with a lower-than-average volume. Run a usability test of each product with about 20 participants and calculate the single usability metric for each one. You now have three data points you can use to predict call volume from a usability metric.
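
Here is a minimal sketch of that correlation step. The three (usability metric, call volume) pairs are invented, and "support calls per 1,000 units sold" is an assumed business metric; substitute whatever figures your own organisation tracks.

```python
# A sketch of step 2: fit a straight line through three (usability metric,
# business metric) data points. All figures are invented for illustration.

def fit_line(points):
    """Ordinary least-squares fit of y = intercept + slope * x."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in points)
             / sum((x - mean_x) ** 2 for x, _ in points))
    intercept = mean_y - slope * mean_x
    return intercept, slope

# (single usability metric %, support calls per 1,000 units sold):
# one weak product, one typical product, one strong product.
observations = [(45, 180), (60, 120), (80, 60)]
intercept, slope = fit_line(observations)
print(f"calls per 1,000 units ~ {intercept:.0f} + ({slope:.1f} x usability metric)")
```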

Step 3: Start predicting product successes and failures

As your body of data increases you will be able to correlate usability metrics with business metrics and start to make predictions about the likely success of a product. You will be able to stand up in a business meeting and say things like this: "Our new product has a usability metric of 53%. If we launch it now we can expect to see a customer satisfaction rate of only 40% and a return rate of 25%. In addition, the customer support agents are likely to see a 20% increase in call volume. Are we sure we want to risk this?"
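
Continuing the sketch from step 2, and keeping its invented figures, the forecast behind a statement like that can be as simple as plugging the new product's usability metric into the fitted line:

```python
# A sketch of step 3: plug the new product's usability metric into the line
# fitted in step 2. The intercept and slope are from the invented data above.
intercept, slope = 330.0, -3.4

def predicted_call_volume(usability_metric):
    """Forecast support call volume for an as-yet-unreleased product."""
    return intercept + slope * usability_metric

print(round(predicted_call_volume(53)))  # roughly 150 calls per 1,000 units sold
```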

User experience is about more than numbers

Paul Brodeur has written, "Statistics are human beings with the tears wiped off". The same could be said for usability measurements. I'd be the first to confess that these numbers capture only one aspect of the user experience — but it's an aspect that we can use to provide real business value and to ensure that user experience has a voice at the business table. It's not only about numbers, but numbers are certainly part of what we do. And if we're going to collect numbers, we owe it to ourselves to do something useful with them, and not simply write down the license plates of passing cars.

Sign up for our newsletter to hear about future articles where we'll show you how to use usability metrics to diagnose problems with your product or interface.

Acknowledgements

Thanks to Beth Maddix, Lynne Tan and David Travis for their comments on this article.

About the author

Philip Hodgson

Dr. Philip Hodgson (@bpusability on Twitter) has been a UX researcher for over 25 years. His work has influenced design for the US, European and Asian markets, for everything from banking software and medical devices to store displays, packaging and even baby care products. His book, Think Like a UX Researcher, was published in January 2019.


