Drinking from a fire hydrant

Running a usability test has been compared to taking a drink from a fire hydrant: you get swamped with data in the form of usability issues that need to be organised, prioritised and (hopefully) fixed. Although it's tempting to use your own judgement to determine severity, that approach is hard to defend when a developer challenges your decision: "Why is that issue critical? I think it's more of a medium problem."

A standard process for defining severity lets you assign ratings consistently and gives people the transparency they need to check your work.

In fact, we can classify any usability problem by asking just three questions.

Does the problem occur on a red route?

Red routes — frequent or critical tasks — are the most important tasks that the system needs to support, by definition. For example, if the "on-off" button on your newly designed gadget is hard to operate, all of your users will be affected. Because problems on red routes affect more users, they are more severe.

Is the problem difficult for users to overcome?

Some usability problems are show-stoppers: users simply can't proceed. For example, if an important control is hidden in a dialogue box or behind a right click, the functionality may as well not exist for some users. Other usability problems are easy to work around. Problems that are hard to overcome are more severe because they have a bigger impact on completion rate.

Is the problem persistent?

Persistent problems — problems that keep cropping up — are more severe because they have a bigger impact on time on task and on customer satisfaction. An example of a persistent problem is a web site that doesn't have underlined hyperlinks. This means users can find the links only by "minesweeping" over the page. This problem is persistent, because even when users know the solution to the problem they still have to experience it. Note that "persistent" means that the problem occurs repeatedly, throughout the interface — users come across the problem on multiple screens or pages.

We can put these three questions into a decision tree and use it to define four severity levels.

A decision tree for usability issues

Download this decision tree as a PDF

How should you interpret the severity levels?

Critical
This usability problem will make some customers unwilling or unable to complete a common task. Fix urgently.
Serious
This usability problem will significantly slow down some customers when completing a common task and may cause customers to find a workaround. Fix as soon as possible.
Medium
This usability problem will make some customers feel frustrated or irritated but will not affect task completion. Fix during the next "business as usual" update.
Low
This is a quality problem, for example a cosmetic issue or a spelling error. Note: Although this is a minor issue in isolation, too many "lows" will negatively affect credibility and may damage your brand.
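
If you log issues in a spreadsheet or issue tracker, the three questions are easy to record as yes/no fields, and the severity rating can then be computed automatically. The Python sketch below is one way to do this; it is not part of the original article, and it assumes a simple mapping in which each "yes" answer raises the severity by one level (three yeses is Critical, none is Low). Check that mapping against the downloadable decision tree before adopting it.

```python
# A minimal sketch (an assumption, not the article's official tool) of how
# the three questions could drive a severity rating in an issue log.
# The mapping below simply counts "yes" answers; verify it against the
# decision tree PDF, which may combine the questions differently.

from dataclasses import dataclass


@dataclass
class UsabilityIssue:
    description: str
    on_red_route: bool      # Does the problem occur on a red route?
    hard_to_overcome: bool  # Is the problem difficult for users to overcome?
    persistent: bool        # Is the problem persistent?


def severity(issue: UsabilityIssue) -> str:
    """Map the three yes/no answers to one of the four severity levels."""
    yes_count = sum([issue.on_red_route, issue.hard_to_overcome, issue.persistent])
    return {3: "Critical", 2: "Serious", 1: "Medium", 0: "Low"}[yes_count]


# Hypothetical example: a hidden control on a checkout page (a red route)
issue = UsabilityIssue(
    description="'Apply voucher' only appears on right-click",
    on_red_route=True,
    hard_to_overcome=True,
    persistent=False,
)
print(severity(issue))  # Serious
```

Whatever mapping you settle on, the point of writing it down like this is consistency: anyone who queries a rating can re-run the same three questions and arrive at the same answer.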

Moving beyond intuition

If you currently prioritise usability problems using 'gut feel' or intuition, you run the risk of being exposed as a fraud by a developer or manager who asks you to justify your priority ranking. Instead, deliver more robust findings by using this decision tree.

About the author

David Travis

Dr. David Travis (@userfocus on Twitter) is a User Experience Strategist. He has worked in the fields of human factors, usability and user experience since 1989 and has published two books on usability. David helps both large firms and start-ups connect with their customers and bring business ideas to market. If you like his articles, why not join the thousands of other people taking his free online user experience course?


