Get hands-on practice in all the key areas of UX and prepare for the BCS Foundation Certificate.
Engaging a representative sample of participants in user research sounds like a good idea but it is flawed. It requires lots of participants, does not work in an agile development environment, stifles innovation and reduces your chances of finding problems in small sample usability tests. When combined with iterative design, theoretical sampling (where theory and data collection move hand in hand) provides a more practical alternative.
Sooner or later, when you present your design research findings, someone will question your sample's demographic representativeness. "How can you possibly understand our audience of thousands by speaking with just 5, or even 25, people?", you'll be asked. "How can you ensure that such a small sample is representative of the larger population?"
This question is based on the assumption that demographic representativeness is an important characteristic of design research. But this assumption is wrong for four reasons. The first two reasons are practical:

- It requires a sample size that is too large for almost all design research.
- It doesn't work in an agile development environment.

And the second two reasons are methodological:

- It stifles innovation.
- It reduces your chances of finding problems in small sample usability tests.
Let's look at each of these in turn.
An obvious argument to make against demographic representativeness is that it results in sample sizes that are too large for almost all design research. For example, to make a sample representative of a demographic, you would aim for an equal balance of gender (male and female), domain knowledge (experts and novices), technical knowledge (digital savvy and digital novices), device types used (desktop and mobile), geographical location (urban and rural), age (Gen X and Millennial)… as well as additional, specific characteristics important for your particular product. To get even rudimentary coverage of these characteristics you will need a large sample size.
For example, with the six characteristics I've listed above you'll need a sample size of 64 (2 × 2 × 2 × 2 × 2 × 2 = 64) just to end up with one person representing each target segment.
And what if that one participant is unusual in some other, non-representative way? Can one participant ever be representative of a segment? It seems that all we've done is move the issue of 'representativeness' further down the chain. Boosting the sample size in each segment from one to (say) 5 participants (to make it more representative) means we now need a sample size of 320 participants. Very quickly, our sample size escalates dramatically as we build in more 'representativeness'.
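The arithmetic behind this explosion is easy to sketch. The snippet below enumerates the segments produced by the six two-level characteristics listed earlier; the characteristic names are illustrative labels taken from the text, and the per-segment size of 5 matches the example above.

```python
# Sketch: how sample size explodes as we add binary screening
# characteristics. Each characteristic has two levels, so the number
# of segments doubles with every characteristic we add.
from itertools import product

characteristics = {
    "gender": ["male", "female"],
    "domain knowledge": ["expert", "novice"],
    "technical knowledge": ["digital savvy", "digital novice"],
    "device type": ["desktop", "mobile"],
    "location": ["urban", "rural"],
    "age": ["Gen X", "Millennial"],
}

# Every combination of levels is one segment to recruit for.
segments = list(product(*characteristics.values()))

print(len(segments))      # 2**6 = 64 segments, one participant each
print(len(segments) * 5)  # 320 participants at 5 per segment
```

Adding a seventh two-level characteristic would double both figures again, which is why screeners that chase full demographic coverage quickly become unworkable.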
This isn't practical for design research.
A second reason a representative sample is impractical is that it doesn't work with agile development. Recall our sample size of 64 participants from the previous section. This belongs to a world where we can define our research problem up front and plan exactly how to arrive at a solution. Yet no modern software development team works like this, because requirements can't be nailed down in advance. Instead, development teams rely on iteration — and this is the same approach we should adopt as user researchers.
For help, we can turn to a different approach to participant sampling used by qualitative researchers. This approach is very different from the criteria used in statistical sampling. In particular, the qualitative researcher does not sample participants randomly. Instead, theory and data collection move hand in hand. The qualitative researcher simultaneously collects and analyses data while also deciding what data to collect next and which participants to include. In other words, the process of data collection is controlled by the researcher's emerging understanding of the overall story. This sampling technique is known as 'theoretical sampling'.
This means the user researcher should select individuals, groups, and so on according to the expected level of new insights. You want to find the users who will give you the greatest insights, viewed in the context of the data you've already collected.
It's easy to see how you can adapt this approach to working in sprints with an agile team. Rather than do all of the research up front, we do just enough research to help the team move forward. Our sample size and its representativeness both increase as the project develops.
These two practical issues of representativeness — that it requires a large sample size and that it doesn't fit with agile ways of working — are important. But they do not fully address the point made by our critic. Practical research methods are great, but we can't use impracticality as our defence against shoddy research.
But these are not the only issues.
There are methodological issues too. Aiming for a 'representative' sample in user research stifles innovation, and it reduces your chances of finding problems in small sample usability tests. Let's turn to those issues now.
A third problem with representative samples is that they stifle innovation. With research in the discovery phase, when we are trying to innovate and come up with new product ideas, we don't know our audience.
Stop for a second and let that sink in: we don't know our audience — or at least, we know it only roughly. There is no list of people that we can select from because we don't have any customers — we may not even have a product. Indeed, part of the user researcher's role in the discovery phase is to challenge what their team think of as 'the product'. The role of the user researcher is to help development teams see beyond their tool to the user's context, to understand users' unmet needs, goals and motivations.
Since we don't know who our final audience will be, it's impossible to sample the audience in any way that's representative. It would be like trying to sample people who will be driving an electric car in 2030. Even if we already have a list of customers who use an existing product, we can't use only those people in our discovery research, because then we are speaking only to the converted. This reduces opportunities for innovation because we are speaking only to people whose needs we have already met.
Instead, to be truly innovative, we need to discover the boundaries and the rough shape of the experience for which we are designing. Rather than make predictions about an audience, innovative discovery research tries to make predictions about an experience. You're creating tests and hypotheses to help you understand what's going on.
As an example, let's say I want to understand how people use headphones because I want to innovate in the headphone product space. I don't pick a representative sample of headphone users. Instead I start somewhere — almost anywhere. Maybe with a commuter who wears headphones on the train.
Then I ask myself: "Who is most different from that user? Who would be the 'opposite'?" That leads me to someone who has an entirely different context: perhaps an audiophile who uses headphones only at home.
But I need to explore this space fully. So let's look at some of the edges: working musicians; sound recordists; teenagers who listen to bouncy techno.
Let's look further and, to adopt the terminology of jobs-to-be-done, question what "job" the headphones are doing. If people are using headphones to block out noise from co-workers at work, then maybe we want to understand the experience of people who wear ear defenders. If people are using headphones to learn a new language on their commute, then maybe we want to look at the way people learn a foreign language online.
One of my favourite examples comes from IDEO. They were designing a new kind of sandal. They expressly included outliers in their sample, like podiatrists and foot fetishists, to see what they could learn. This is what I mean by understanding the boundaries of the research domain.
We can't use this same defence when it comes to usability testing. Now we know the rough shape of our audience: it would be foolish to involve (say) foreign language learners in a usability test of headphones aimed at working musicians. We need to match our participants to the tasks that they carry out.
But recall that a usability test typically involves a small number of participants (5 has become the industry standard). This is because 5 participants gives us an 85% chance of finding a problem that affects 1 in 3 users. However, some important usability problems affect a small number of users. On some systems, testing 5 users may only find 10% of the total problems, because the other 90% of problems affect fewer than 1 in 3 users.
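These figures come from the standard binomial model of problem discovery: the chance that at least one of n participants hits a problem that affects a proportion p of users is 1 − (1 − p)^n. A quick sketch, assuming this model (the classic 85% figure is usually quoted for a problem incidence of about 31%, which is close to the "1 in 3" in the text):

```python
# Sketch: probability that at least one of n test participants
# encounters a usability problem affecting a proportion p of users,
# assuming the standard binomial discovery model.

def p_found(p: float, n: int) -> float:
    """Chance a problem affecting proportion p of users shows up
    at least once in a usability test with n participants."""
    return 1 - (1 - p) ** n

print(round(p_found(1 / 3, 5), 2))  # 0.87: a 1-in-3 problem, 5 users
print(round(p_found(0.31, 5), 2))   # 0.84: the classic ~85% estimate
print(round(p_found(0.1, 5), 2))    # 0.41: a rarer 1-in-10 problem
```

Note how steeply the odds fall for rarer problems: with 5 participants you are more likely to miss a 1-in-10 problem than to find it, which is the point the next paragraphs build on.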
To get the most value out of our usability test, it therefore makes sense to bias our sample to include participants who are more likely to experience problems with our product. This type of person might be less digitally savvy or may have less domain expertise than the norm.
This means you want to avoid having too many participants in your usability test sample who are technically proficient (even if they are otherwise representative of your audience). This is because these types of participant will be able to solve almost any technical riddle you throw at them. Instead, you should actively bias your sample towards people with lower digital skills and lower domain knowledge. Including people like this in your sample will make it much more likely youíll find problems that affect a low proportion of users. This helps you make the most of your 5 participants.
Just to be clear, I'm not saying you should test your product with total novices. Participants in a usability test should be (potential) users of the product you're testing. If your product is aimed at air traffic controllers, that's where you draw your participant sample from. But to make most use of your small sample, recruit air traffic controllers who have less domain knowledge or lower digital skills than the norm for that group. In other words, bias your sample towards the left of the bell curve.
There's always the (unlikely but statistically possible) chance that every one of your participants in a round of research is unrepresentative in an important way. This could send the development team off at a tangent and risks derailing the project. For example, recruiting usability test participants who are less digitally savvy than the norm may result in false positives: mistakenly reporting a usability problem when one doesn't exist. Why isn't this more of an issue?
The reason this isn't a serious issue is because of the power of iterative design. We involve a small sample of participants in our research and make some design decisions based on the outcomes. Some of these decisions will be right and some will be wrong (false positives). But with iterative design, we don't stop there. These decisions lead to a new set of hypotheses that we test, perhaps with field visits to users or by creating a prototype. In this second round of research we involve another small sample of participants — but crucially a different sample than before. This helps us identify the poor design decisions we made in earlier research sessions and reinforces the good decisions. We iterate and research again. Iterative design is the methodology that prevents us from making serious mistakes with our research findings because it leverages the power of the experimental method to weed out our mistakes.
I titled this article, "Why you don't need a 'representative sample' in your user research" but I could have gone further: in my view, you should actively avoid a 'representative sample'. That's because our goal is not about delivering a representative sample but about delivering representative research. User researchers can achieve this by combining iterative design with theoretical sampling.
Thanks to Philip Hodgson and Todd Zazelenchuk for comments on an earlier draft of this article.
Dr. David Travis (@userfocus) has been carrying out ethnographic field research and running product usability tests since 1989. He has published three books on user experience including Think Like a UX Researcher. If you like his articles, you might enjoy his free online user experience course.
copyright © Userfocus 2021.