Some people approach a usability review like a dogmatic movie critic, prepared to give their opinion on an interface’s strengths and weaknesses.

This is the wrong mindset.

A design review is not about opinions; it’s about predicting how users will interact with an interface.

Here are four problems you’ll need to address to ensure your review avoids personal opinion and leads to a better interface.

Problem #1: The reviewer fails to take the user’s perspective

The hardest part of being a good user experience practitioner seems, at first sight, to be the easiest: taking the user’s perspective. It’s an easy slogan to spout, but like most slogans it’s also easy to forget what it means. I often hear reviewers preface a ‘problem’ they have identified with a sentence like, “I really hate it when I see…” or “Personally, when I use this kind of system…”

Here’s the difficult truth: it doesn’t matter what you like.

The interface may offend your aesthetic sensibilities, or it may look clichéd or old-fashioned. It doesn’t matter — because you are not the user. As Kim Vicente has said:

“Ironically, the strength of the Wizards — the often brilliant designers of high-tech products and systems today — is also partially responsible for their downfall: since they have so much scientific and engineering expertise, they tend to think that everyone knows as much about technology as they do.” — Kim Vicente, ‘The Human Factor: Revolutionizing the Way People Live with Technology’.

This means that if you’re a member of a UX design team, you’re unlikely to be representative of your users. And if you review the interface from your own perspective, you’ll do a very poor job of predicting the problems that real users will have.

So before even starting the review you need a firm idea of your users and their goals. (If you can’t do this, consider testing with real users rather than carrying out a review.) This step isn’t just a formality — it really helps you steer the review because it enables you to predict the future. “Predict the future” sounds like a bold statement, but consider this:

  • If you know the users’ goals, then you should be able to predict why the user is visiting the site.
  • If you know why the user is visiting the site then you should be able to predict the specific tasks that the user will be carrying out.
  • If you know the tasks, then you should be able to predict the most important features or functions that the user will be looking for to complete those tasks.
  • Putting all this together: you should now be able to predict where users are most likely to look on the screen, what other screen elements might distract them, and even where they are likely to click first.

A good usability review will begin with a data-driven description of the users of the product and a detailed description of the users’ tasks. If your review omits these, you’re almost certainly evaluating the product from your own perspective and as a consequence your findings will lack the predictive validity that your client needs.

Problem #2: The review is based on the opinion of one reviewer

We carry out an exercise on our expert review training course where we have a shoot-out between a single reviewer and a team of three. We regularly find that the single reviewer finds only around 60% of the usability issues found by the team. This isn’t a new finding: researchers have known for some time that you need 3-5 reviewers to get adequate coverage of usability issues in an expert review.
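
That 3-5 figure comes from the way problem discovery compounds across reviewers. As a rough illustration, here’s the standard problem-discovery model from the usability literature in a few lines of Python. The numbers are my own assumptions, not data from the course exercise; the value of p is simply chosen to roughly match the 60% result above:

```python
# A back-of-the-envelope sketch using the standard problem-discovery model:
# n reviewers find a proportion 1 - (1 - p)^n of all problems, assuming each
# reviewer independently finds any given problem with probability p.

def proportion_found(p: float, n: int) -> float:
    """Expected proportion of all usability problems found by n reviewers."""
    return 1 - (1 - p) ** n

p = 0.55  # illustrative assumption: a lone reviewer finds 55% of all problems

solo = proportion_found(p, 1)  # 0.55
team = proportion_found(p, 3)  # about 0.91

# The lone reviewer's haul relative to a three-person team:
print(f"Solo reviewer finds {solo / team:.0%} of what the team finds")
# prints "Solo reviewer finds 61% of what the team finds"
```

The model also shows diminishing returns setting in quickly as the panel grows, which is why the literature settles on 3-5 reviewers rather than ever-larger teams.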

Adding multiple reviewers helps find more problems for a number of reasons:

  • Some reviewers have more domain knowledge than you (for example, they know a lot about finance if it’s a banking application), which means they can find problems you’ll miss.
  • Some reviewers tend to be sensitive to a sub-set of usability issues — for example, they may be more sensitive to visual design issues or issues to do with information architecture — and they tend to over-report those issues at the expense of other, equally important ones (like task orientation or help and support).
  • Some reviewers have had more exposure to users (either via usability tests or site visits) and this means they are better at identifying the usability issues that trip up people in the real world.
  • Different people just see the world differently.

But ego is a terrible thing. It’s almost as if people think that by asking other people to collaborate in the review, they are diminishing their status as ‘the expert’. In fact, the opposite is true: involving extra reviewers demonstrates a wider knowledge of the literature. Despite this, the majority of expert reviews that I come across are still carried out by a single reviewer.

A good usability review will combine results from at least three reviewers. If your review is based on the work of a single reviewer, it’s likely that you’ve only spotted around 60% of the usability issues.

Problem #3: The review uses a generic set of usability principles

All reviewers have their favourite set of usability principles, such as Nielsen’s heuristics or ISO’s dialogue principles. These principles are based on decades of research into human psychology and behaviour, which is a good thing as you can be sure that — unlike technology — they won’t change over time.

But this strength is also a weakness.

By their very nature, these principles are fairly generic and may even seem a little vague when applied to a new technology, like mobile. This is why an experienced reviewer will develop a usability checklist to interpret the principle for the technology and domain under review.

For example, take a principle like ‘User control and freedom’. This is one of Nielsen’s principles, developed prior to the web, and is expressed as follows: “Users often choose system functions by mistake and will need a clearly marked ‘emergency exit’ to leave the unwanted state without having to go through an extended dialogue.”

This principle was written for the graphical user interfaces in existence at the time, so it would remind you, as a reviewer, to check (amongst other things) that dialog boxes had a cancel button and that the interface supported undo. Fast forward to the web and these checks aren’t relevant to most web pages. To re-interpret the principle for web pages, we’ll probably want to check (amongst other things) that the site doesn’t disable the back button and that there’s a clearly marked route back to ‘Home’ from every page on the site.

So the guideline is still relevant but the way we check for compliance is different.
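
To make this concrete, here’s one hypothetical way you might record such a checklist entry. The principle wording is Nielsen’s, but the data structure, field names and specific checks are my own illustrative sketch, not a standard checklist format:

```python
# An illustrative sketch of one generic principle translated into
# technology-specific checks. The structure is hypothetical.

checklist_entry = {
    "principle": "User control and freedom (Nielsen)",
    "checks": {
        "desktop GUI": [
            "Dialog boxes have a clearly marked Cancel button",
            "The interface supports undo",
        ],
        "web": [
            "The site does not disable the browser's back button",
            "Every page offers a clearly marked route back to 'Home'",
        ],
    },
}

# In a review, walk through the checks for the technology under test:
for check in checklist_entry["checks"]["web"]:
    print(f"[ ] {check}")
```

The point isn’t the format: it’s that each generic principle gets cashed out as concrete, testable checks for the technology in front of you.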

It takes some effort to generate a checklist for a specific technology — I know, as I spent weeks developing a usability checklist for the web based on various generic guidelines and heuristics. But it’s time well spent because having a checklist to hand when you carry out a review will ensure that you get full coverage of the principles and ensure none get forgotten.

A good usability review will use a checklist to interpret the principles for the specific technology under test. If you use the high-level principles only, you risk missing important usability issues.

Problem #4: The reviewer lacks experience

Many user interfaces are so bad that finding usability problems with your checklist is simple. But a checklist does not an expert make. You now have to decide if the ‘problem’ is a genuine issue that will affect real users, or if it’s a false alarm that most users won’t notice.

Sadly, there’s no simple way to distinguish between these two cases. Here’s a relevant quotation from Nobel prizewinner Eric Kandel:

“Maturation as a scientist involves many components, but a key one for me was the development of taste, much as it is in the enjoyment of art, music, food or wine. One needs to learn what problems are important.” — Eric R. Kandel, ‘In Search of Memory’.

This analogy with ‘connoisseurship’ is interesting and applies equally to the issue of identifying usability problems. You need to learn what problems are important.

A friend of mine, a ceramics artist, told me the following story. She was asked to judge the ceramics section of an art show (about 20 artists), but included in her section were about 5 ‘mixed-media’ artists (working in media like wood, metalwork and glass).

For the ceramicists she was able to evaluate the work thoroughly — the aesthetics, the skill involved, the originality, the craftsmanship — and she was able to give a rigorous critique of their pieces. But for the mixed-media art she could only fall back on her personal opinion of what she liked or didn’t like. When it came to judging the craftsmanship, she had no knowledge of what is involved in, say, blowing glass or welding metal.

But here’s the punchline: because she was uncertain, she found herself giving the mixed-media artists the benefit of the doubt and rating them higher.

Generalising from this story: if you don’t understand the domain or the technology, you may tend to be more lenient — perhaps because being very critical means justifying and explaining your judgement, and that could expose your lack of experience with the domain.

The risk is that you’ll fail to report an important usability problem.

One way you can develop ‘taste’ in the field of user experience is to break down the wall that so often separates the design team from users. For example:

  • Sit in on a usability test and observe the seemingly trivial user interface elements that stump test participants.
  • Spend time with your customers in their home or place of work so you truly grok their goals, aspirations and irritations.
  • Run a user research session, like a card sort, to appreciate how your users view the world.
  • Be a test participant.

A good usability review needs to be led by someone with experience. Without this practical knowledge you won’t be able to reliably distinguish the critical showstoppers from the false alarms.

Conclusion

Usability expert reviews are an efficient way to weed out usability bloopers from an interface — but only if they avoid personal opinion. Pay attention to these four common mistakes and you’ll find your reviews are more objective, more persuasive and more useful.

If you enjoyed this, you’ll love our training course: How to carry out a usability expert review.

About the author

David Travis

Dr. David Travis (@userfocus) has been carrying out ethnographic field research and running product usability tests since 1989. He has published three books on user experience including Think Like a UX Researcher. If you like his articles, you might enjoy his free online user experience course.


