I spend a lot of my time working with early- and mid-career user researchers. Among other skills, I coach them on how to do usability tests. There’s a question I ask early on to check their level of knowledge: “How many participants do you need in a usability test?”
Inevitably, they have heard that 5 is a magic number. Some of the more experienced go further: “5 participants are all you need to get 85% of the usability problems.” Not all of them mention 85%. Some say 80%. Some say ‘most’.
Although this belief is widely held, it’s a myth. Testing 5 users will not find 85% of the usability problems in a system. It’s entirely likely that testing 5 users will find a fraction of the total number of usability problems.
The myth of the magic number 5
The myth isn’t due to the original research but to the way the research has been interpreted. The statement needs an important qualification. The correct formulation is: “5 participants are enough to get 85% of the usability problems that affect 1 in 3 users.” On first reading, this may sound like splitting hairs. But in fact it’s critical to understanding how you can find more usability problems in your test, with or without increasing the number of participants.
To explain why this is the case, assume your interface has a single usability problem. Let’s say you have a novel kind of slider that people use to enter a number in a form. That’s not a great way to enter numbers in a form and some people will struggle to use it. How many users will we need to test to detect this problem?
The answer is: it depends. It depends on how many users it affects. For some people, the usability problem may not be an issue. They may be tech savvy and find it a breeze to use the control. For others, it may prevent them from completing the task. They may not even know where to start with the slider.
Because a usability problem rarely affects every single user, we need to refine the question and ask, “How many users will we need to test to find a problem that affects a fixed percentage of users?”
Researchers typically set this percentage at 31% — let’s call that 1 in 3 users to make the sums easy. Now let’s run a test.
Our first user comes in and we have a 1 in 3 chance of spotting the problem. Our second user comes in and we have a 1 in 3 chance of spotting the problem. Our third user comes in and we have a 1 in 3 chance of spotting the problem. You’d think, given that we’ve now tested 3 users, we should have found the problem, but statistics doesn’t work like that. It’s like tossing a coin: sometimes you have to toss a coin more than twice to get heads, even though the likelihood of heads on each toss is 50%. Because of the way probability works, you actually need to test with more than 3 users to find a problem that affects 1 in 3 users.
How many? Again, we can’t be exact: we have to be probabilistic. What we can say is that, if you test with 5 users, you have an 85% chance of finding a problem that affects 1 in 3 users. (If you’d like more details on this, Jeff Sauro has a great article that includes calculators you can play with to understand probability.)
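The arithmetic behind that 85% figure is simple: if a problem affects a proportion p of users, the chance of seeing it at least once in a test with n users is 1 − (1 − p)^n. Here’s a minimal Python sketch of that calculation (the function name is mine):

```python
def chance_of_finding(p, n):
    """Probability of observing, at least once, a problem that
    affects a proportion p of users when testing n users."""
    return 1 - (1 - p) ** n

# A problem affecting 1 in 3 users, tested with 5 participants:
print(round(chance_of_finding(1/3, 5), 2))  # → 0.87, i.e. roughly 85%
```

Note that even with 5 users there is still a 13% chance of missing a problem this common: probability gives you good odds, not a guarantee.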
Some critical usability problems affect few users
The reason this matters is that some important usability problems affect a small number of users. Take hint text inside form fields as an example. Some people mistake the hint text for an entry: they think the field is already completed. Others get mixed up trying to delete the hint text. For most people (say 90%) it’s not a problem. But the 10% of users who do experience this problem really struggle to complete the form.
If you’re designing a system to be used by a wide range of users (like a government system) this really matters. Because what if a problem affects not 1 in 3 users but 1 in 10 users? How many users will we need to test to find that problem? It turns out you need to test 18 users to have an 85% chance of finding that problem.
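You can check the figure of 18 with the same formula, by searching for the smallest sample that reaches an 85% chance of detection. A quick sketch (the function name, and rounding the probability to the nearest percentage point, are my own conventions):

```python
def smallest_sample(p, target=0.85):
    """Smallest number of test users that gives at least a `target`
    chance (to the nearest percentage point) of seeing a problem
    that affects a proportion p of users."""
    n = 1
    # Chance of seeing the problem at least once with n users: 1 - (1 - p)^n
    while round(1 - (1 - p) ** n, 2) < target:
        n += 1
    return n

print(smallest_sample(1/3))   # → 5 users for a problem affecting 1 in 3
print(smallest_sample(1/10))  # → 18 users for a problem affecting 1 in 10
```

The steep jump from 5 to 18 participants is exactly why the “magic number 5” claim only holds for problems common enough to affect 1 in 3 users.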
So to say that 5 users will get 85% of all the problems in a system totally misrepresents the research. On some systems, testing 5 users may only find 10% of the total problems, because the other 90% of problems affect fewer than 1 in 3 users.
Increasing your chances of finding usability problems
If this is leaving you frustrated and thinking that you need to run usability tests with larger samples, fear not. There is a way to find more problems without increasing the number of users in your study. Here are three ideas:
- Include in your participant sample people with low digital skills. In other words, don’t just recruit the ones who are tech savvy. Including people with low digital skills will make it much more likely you’ll find problems that affect a low proportion of users.
- Ask participants to do more tasks. How many tasks participants try turns out to be a critical factor for finding problems in a usability test.
- Arrange to have several people from the design team observe the test and independently note down the problems they find. Research shows that the chances of you missing a critical usability problem that the other observers find is about 50–50.
If you still want to test more users, bravo! But rather than run one big test with lots of users, I’d encourage you to run multiple usability tests with smaller samples (perhaps every sprint). So long as your testing is part of an iterative design process (where you find problems, fix them and then test again), with a sample size of 5 you will eventually start finding some of those gnarly problems that affect fewer than 1 in 3 users.
About the author
Dr. David Travis (@userfocus on Twitter) is a User Experience Strategist. He has worked in the fields of human factors, usability and user experience since 1989 and has published two books on usability. David helps both large firms and start-ups connect with their customers and bring business ideas to market. If you like his articles, why not join the thousands of other people taking his free online user experience course?