Drinking from a fire hydrant
Running a usability test has been compared with taking a drink from a fire hydrant: you get swamped with data in the form of usability issues that need to be organised, prioritised and (hopefully) fixed. It's tempting to use your own judgement to determine severity, but that leaves you exposed when a developer challenges your decision: "How did you make that issue critical? I think it's more of a medium problem."
A standard process for defining severity lets you assign severity consistently and gives others the transparency they need to check your work.
In fact, we can classify any usability problem by asking just three questions.
Does the problem occur on a red route?
Red routes — frequent or critical tasks — are, by definition, the most important tasks the system needs to support. For example, if the "on-off" button on your newly designed gadget is hard to operate, all of your users will be affected. Because problems on red routes affect more users, they are more severe.
Is the problem difficult for users to overcome?
Some usability problems are show-stoppers: users simply can't proceed. For example, if an important control is hidden in a dialogue box or behind a right click, the functionality may as well not exist for some users. Other usability problems are easy to work around. Problems that are hard to overcome are more severe because they have a bigger impact on completion rates.
Is the problem persistent?
Persistent problems — problems that keep cropping up — are more severe because they have a bigger impact on time on task and on customer satisfaction. An example of a persistent problem is a web site that doesn't have underlined hyperlinks. This means users can find the links only by "minesweeping" over the page. This problem is persistent because, even when users know the solution to the problem, they still have to experience it. Note that "persistent" also means that the problem occurs repeatedly throughout the interface: users come across it on multiple screens or pages.
We can put these three questions into a process diagram (a simple decision tree) and use it to define four severity levels; the list below describes each level, and a code sketch of the logic follows it.
How should you interpret the severity levels?
- Critical: This usability problem will make some customers unwilling or unable to complete a common task. Fix urgently.
- Serious: This usability problem will significantly slow down some customers when completing a common task and may cause them to find a workaround. Fix as soon as possible.
- Medium: This usability problem will make some customers feel frustrated or irritated but will not affect task completion. Fix during the next "business as usual" update.
- Low: This is a quality problem, for example a cosmetic issue or a spelling error. Note: although a minor issue in isolation, too many "lows" will negatively affect credibility and may damage your brand.
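The decision tree diagram itself isn't reproduced in this text, but the classification logic is easy to sketch in code. Here is a minimal illustration in Python, assuming each "yes" answer raises severity by one level (the author's actual diagram may branch differently); the names `Severity` and `classify` are hypothetical.

```python
from enum import Enum

class Severity(Enum):
    LOW = 1       # quality problem: cosmetic issue or spelling error
    MEDIUM = 2    # frustrating, but doesn't affect task completion
    SERIOUS = 3   # significantly slows down a common task
    CRITICAL = 4  # blocks some customers from completing a common task

def classify(on_red_route: bool, hard_to_overcome: bool, persistent: bool) -> Severity:
    # Assumption: each "yes" answer bumps severity up one level,
    # so 0 yeses -> LOW and 3 yeses -> CRITICAL.
    yes_count = sum([on_red_route, hard_to_overcome, persistent])
    return Severity(yes_count + 1)
```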
Moving beyond intuition
If you currently prioritise usability problems using 'gut feel' or intuition, you run the risk of being exposed as a fraud by a developer or manager who asks you to justify your priority ranking. Instead, deliver more robust findings by using this decision tree.
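Continuing the hypothetical sketch above, triage then becomes a simple sort: classify each finding with the three yes/no answers and review the most severe first. The findings below are invented for illustration.

```python
# Hypothetical findings, each tagged with answers to the three questions
# (on a red route? hard to overcome? persistent?):
findings = [
    ("On-off button is hard to operate", classify(True, True, True)),
    ("Hyperlinks are not underlined",    classify(True, False, True)),
    ("Spelling error on the help page",  classify(False, False, False)),
]

# Review the most severe problems first.
for description, severity in sorted(findings, key=lambda f: f[1].value, reverse=True):
    print(f"{severity.name:>8}: {description}")
```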
About the author
Dr. David Travis (@userfocus on Twitter) holds a BSc and a PhD in Psychology and is a Chartered Psychologist. He has worked in the fields of human factors, usability and user experience since 1989 and has published two books on usability. David helps both large firms and start-ups connect with their customers and bring business ideas to market. If you like his articles, you'll love his online user experience training course.