Get hands-on practice in all the key areas of UX and prepare for the BCS Foundation Certificate.
"Know thy user" is the first principle of usability, so it's important that you involve the right kind of people in your usability study. These 8 tips for screening participants will show you how to recruit articulate, representative users for your research, quickly filter out the people you don't want and help you avoid the dreaded "no show".
Guy Goma. Photo from BBC News on YouTube.
A couple of years ago, the BBC interviewed Guy Goma about an Apple Computer court case. Mr Goma was presented as an IT expert but was in fact at the BBC for a job interview. The researcher had collected him from reception by mistake, leaving behind the real "Guy", IT expert Guy Kewney. Mr Goma's facial reaction (pictured above) when he suddenly realises he is live on air and about to face a series of questions about music downloading is a sight to behold.
In this instance, having the name "Guy" was a necessary but not a sufficient condition to take part in the interview. "Participant screening" is where you sift through all the possible candidates to identify people who are truly suitable to be participants. It's where you filter out the Guy Gomas and filter in the Guy Kewneys.
Here are our 8 tips for writing great participant screeners:
When faced with a new product or web site, a user's past behaviour will predict their performance much more accurately than their demographic profile will. Demographic factors like gender may be important segmentation variables for marketers but they have very little impact on the way someone actually uses a product. It's important to work with marketing managers to be sure you understand the targeted users but at the same time encourage them to describe users in terms of their behaviours rather than (say) their income level.
If the marketing folks seem vague, or can only describe the customer in high-level terms ("We have two customer groups, people who use laptops and people who use desktops") then this should raise a red flag and suggests some field research is warranted.
But which behavioural variables should you use? On web projects, two behavioural variables used by every user experience researcher are Internet knowledge (the user's knowledge of web idioms like navigation, form filling and search) and task knowledge (the user's knowledge of the domain, for example photography, share dealing or genealogy). Design your screener so you can classify candidates' knowledge as "high" or "low" on both these factors, and recruit people accordingly.
It may be hard for you to get your screener past the product or marketing team without including any demographic questions whatsoever. In that case, place the demographic questions towards the end of the survey and just aim to get an appropriate mix.
If you want to distinguish "high" and "low" Internet knowledge, don't just ask people how long they spend on the Internet. One person's "frequently" is another person's "sometimes". Instead, ask what they do on the Internet. People with high Internet knowledge probably buy things online using a credit card, download and install software, belong to social networking sites like Facebook, manage their photos online, use bookmarking sites like del.icio.us, comment on blogs and subscribe to syndicated ("RSS") web feeds. People with low Internet knowledge will do fewer of these activities and novices will probably seek help from a friend or family member when using the Internet.
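To make this concrete, here is a minimal sketch of a behaviour-based classifier. The activity names and the cut-off score of four are assumptions for illustration, not part of any published screener:

```python
# Hypothetical checklist of online activities. A candidate's Internet
# knowledge is classified from what they do, not from how often they
# say they go online. Activity names and the cut-off are assumptions.
ACTIVITIES = [
    "buys online with a credit card",
    "downloads and installs software",
    "uses a social networking site",
    "manages photos online",
    "uses a social bookmarking site",
    "comments on blogs",
    "subscribes to RSS feeds",
]

def internet_knowledge(ticked: set) -> str:
    """Classify a candidate as 'high' or 'low' Internet knowledge
    from the set of activities they report doing."""
    score = sum(1 for activity in ACTIVITIES if activity in ticked)
    return "high" if score >= 4 else "low"

print(internet_knowledge({"comments on blogs", "manages photos online"}))  # low
```

The point of the sketch is that the question asks about concrete behaviours, so two recruiters scoring the same answers will always reach the same classification.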
Broadly speaking, screeners contain two kinds of question: exclusion questions (where one answer will exclude the candidate, such as answering "Yes" to the question, "Do you work for a competitor company?") and balance questions (where you want to get an equal number of people in different categories, for example a balance of "high" and "low" Internet experience). Because of this, it's helpful to think of your screener as a funnel: ask the exclusion questions early to filter out unsuitable candidates as quickly as possible.
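The funnel idea above can be sketched in a few lines of code. The field names, exclusion rules and quota categories below are hypothetical, chosen only to illustrate the order of questioning:

```python
def screen(candidate: dict, quotas: dict) -> str:
    """Screen one candidate: exclusion questions first, balance questions
    second. `candidate` fields and `quotas` keys are hypothetical."""
    # Exclusion questions first: a single wrong answer ends the call
    # early, before any time is spent on balance questions.
    if candidate["works_for_competitor"]:
        return "excluded"
    if not candidate["makes_purchase_decisions"]:
        return "excluded"
    # Balance questions next: fill equal quotas for each combination
    # of "high"/"low" Internet knowledge and task knowledge.
    category = (candidate["internet_knowledge"], candidate["task_knowledge"])
    if quotas.get(category, 0) > 0:
        quotas[category] -= 1
        return "recruited"
    return "quota full"
```

For example, a study balanced across the four knowledge combinations might start with `quotas = {("high", "high"): 2, ("high", "low"): 2, ("low", "high"): 2, ("low", "low"): 2}` and stop recruiting a category once its count reaches zero.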
Incidentally, a quick aside on that question, "Do you work for a competitor company?" I've often seen screeners that start with a question of the form: "Do you, or does anyone in your household, work for any of the following organisations..." Any candidate faced with this question knows the "correct" answer and may well lie to get selected (a process known as "faking good"). Avoid this by simply asking an open question: "Where do you work?" or "Tell me about your job".
Sometimes candidates can pass all your screening questions but still not be what you want. For example, one of my colleagues was testing a web site that sold eyeglasses. Everything went fine during the usability test until it reached the product selection task. It turned out that, in real life, the participant took one of his "lady friends" along to the opticians to choose his glasses. So if you're testing an e-commerce site, make sure that your participants make the purchase decisions too.
If you are recruiting for a thinking aloud study, you need to screen out candidates who are shy or inarticulate. You can usually judge this by including an open question in your screener: for example, "Tell me about the way you shop for products online". But if you find that you're screening out lots of potential candidates with this requirement (an audience of teenage boys, for example) you may need to rethink your methodology. You want to get representative users after all.
Also, if you are recruiting for an eye tracking study, you'll need to exclude people who wear bifocals, rimless glasses or lots of mascara. (We've yet to find a subtle way of asking that question. It's probably best to ask your participants not to wear any mascara on the day of the test.)
At the beginning of the screening session, make sure your participant realises that answering the screening questions is a prerequisite for taking part in the research, not the research itself. Explain that the incentive will be paid in cash once the session is complete.
Once you have recruited participants, manage their expectations for the session. Most people's preconception of consumer research is the focus group, so if you're running a typical usability session make it clear that the participant will be interviewed alone. This is also a good time to let participants know that the session will be recorded on video and that they will be asked to sign a non-disclosure agreement. If any of these are deal-breakers, now would be a good time to find out.
At the same time, don't reveal too much in case participants decide to do their own research prior to the test. For example, if you tell participants that you'll be asking them to evaluate BigCo's web site, this gives them the opportunity to go to the site and work on it in advance, so as to "practise" for the test. Provide enough information to reassure the participant of the nature of the study, but not the specifics.
Test the screener on a handful of people you know you don't want, and on a handful you know you do want and make sure they fall into the right "bins". And critically, be sure that internal stakeholders "sign off" on the screener so that later they cannot dismiss the value of the study by saying you had the wrong participants. When a frustrated colleague observes your usability study and asks you, "Where did you get such stupid users?", you want to be sure you have a watertight response.
Participants who fail to turn up are the bane of the researcher's life. Not only is this frustrating and a waste of time, it's also expensive and embarrassing, especially if you have a couple of senior managers in the observation room twiddling their thumbs. You need to avoid participant no-shows at all costs.
Following these suggestions will help, but there are no guarantees. So it's also worth getting back-up cover. Recruit "floaters": people who agree to turn up at the facility at the same time as the first participant and stay until that day's last participant has arrived. The floater's life is boring but it's well paid: we tend to pay floaters 2-4 times as much as regular participants. (Just make sure you have lots of magazines and newspapers for them to read).
For critical projects, you should also consider double-recruiting, where you recruit two participants for each slot. If both participants turn up, ask your observers to review each participant's screener and choose the person they want to take part. The other participant should remain at the facility for 15 minutes or so, just to make sure the chosen participant is up to scratch. Then they can be sent on their way with the full incentive.
If you use an external agency to do the recruiting, walk through the screener carefully with the recruiter and make sure there are no ambiguities. Make sure you speak with the actual recruiter: most recruitment agencies have a legion of subcontractors who actually do the leg work, and it's that person you need to speak with, not his or her manager. Explain to the recruiter why getting the wrong people is a serious problem for your study. You should also identify the questions where you can allow some flexibility and those where you can't: this will make your recruiter's life a lot easier.
User research with unrepresentative users is a waste of time and money. It makes no sense to cut corners. I hope these suggestions help you set up and manage your own participant recruitment program; if you have any additional suggestions, please let me know.
Dr. David Travis (@userfocus) has been carrying out ethnographic field research and running product usability tests since 1989. He has published three books on user experience including Think Like a UX Researcher. If you like his articles, you might enjoy his free online user experience course.
copyright © Userfocus 2019.