Participant screening gets the right Guy
A couple of years ago, the BBC interviewed Guy Goma about an Apple Computer court case. Mr Goma was presented as an IT expert but was in fact at the BBC for a job interview. The researcher had collected him from reception by mistake, leaving behind the real "Guy", IT expert Guy Kewney. Mr Goma's facial reaction when he suddenly realises he is live on air and about to face a series of questions about music downloading is a sight to behold.
In this instance, having the name "Guy" was a necessary but not a sufficient condition for taking part in the interview. "Participant screening" is where you sift through all the possible candidates to identify the people who are truly suitable to take part. It's where you filter out the Guy Gomas and filter in the Guy Kewneys.
Here are our 8 tips for writing great participant screeners:
- Screen for behaviours, not demographics
- Ask precise questions
- Identify unsuitable candidates early
- Get value-for-money participants
- Manage each participant's expectations
- Pilot test the screener
- Avoid no shows
- Brief the recruiting company
Screen for behaviours, not demographics
When faced with a new product or web site, a user's past behaviour will predict their performance much more accurately than their demographic profile will. Demographic factors like gender may be important segmentation variables for marketers but they have very little impact on the way someone actually uses a product. It's important to work with marketing managers to be sure you understand the targeted users but at the same time encourage them to describe users in terms of their behaviours rather than (say) their income level.
If the marketing folks seem vague, or can only describe the customer in high level terms ("We have two customer groups, people who use laptops and people who use desktops") then this should raise a red flag and suggests some field research is warranted.
But which behavioural variables should you use? On web projects, two behavioural variables used by every user experience researcher are Internet knowledge (the user's knowledge of web idioms like navigation, form filling and search) and task knowledge (the user's knowledge of the domain, for example photography, share dealing or genealogy). Design your screener so you can classify candidates' knowledge as "high" or "low" on both these factors, and recruit people accordingly.
It may be hard for you to get your screener past the product or marketing team without including any demographic questions whatsoever. In that case, place the demographic questions towards the end of the survey and just aim to get an appropriate mix.
Ask precise questions
If you want to distinguish "high" and "low" Internet knowledge, don't just ask people how long they spend on the Internet. One person's "frequently" is another person's "sometimes". Instead, ask what they do on the Internet. People with high Internet knowledge probably buy things online using a credit card, download and install software, belong to social networking sites like Facebook, manage their photos online, use bookmarking sites like del.icio.us, comment on blogs and subscribe to syndicated ("RSS") web feeds. People with low Internet knowledge will do fewer of these activities and novices will probably seek help from a friend or family member when using the Internet.
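One way to make this concrete is to count the specific behaviours listed above rather than ask for a self-rated frequency. The sketch below is illustrative, not part of any real screener: the behaviour list comes from this article, but the scoring threshold is an assumption you would calibrate during pilot testing.

```python
# Classify a candidate's Internet knowledge as "high" or "low" by
# counting specific online behaviours rather than relying on a
# self-rated "frequently" or "sometimes". The threshold (4 or more)
# is an assumption: tune it during pilot testing so known-high and
# known-low candidates land in the right bins.

BEHAVIOURS = [
    "buys online with a credit card",
    "downloads and installs software",
    "belongs to a social networking site",
    "manages photos online",
    "uses a bookmarking site",
    "comments on blogs",
    "subscribes to RSS feeds",
]

def internet_knowledge(candidate_behaviours):
    """Return 'high' or 'low' based on how many listed behaviours apply."""
    score = sum(1 for b in BEHAVIOURS if b in candidate_behaviours)
    return "high" if score >= 4 else "low"

print(internet_knowledge({"comments on blogs", "buys online with a credit card"}))
# prints "low": only two of the seven behaviours apply
```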
Identify unsuitable candidates early
Broadly speaking, screeners contain two kinds of question: exclusion questions (where one answer will exclude the candidate, such as answering "Yes" to the question, "Do you work for a competitor company?") and balance questions (where you want to get an equal number of people in different categories, for example a balance of "high" and "low" Internet experience). Because of this, it's helpful to think of your screener as a funnel: ask the exclusion questions early to filter out unsuitable candidates as quickly as possible.
Incidentally, a quick aside on that question, "Do you work for a competitor company?" I've often seen screeners that start with a question of the form: "Do you, or does anyone in your household, work for any of the following organisations..." Any candidate faced with this question knows the "correct" answer and may well lie to get selected (a process known as "faking good"). Avoid this by simply asking an open question: "Where do you work?" or "Tell me about your job".
Sometimes candidates can pass all your screening questions but still not be what you want. For example, one of my colleagues was testing a web site that sold glasses. Everything went fine until the product selection part of the usability test, when it turned out that, in real life, the participant took one of his "lady friends" along to the opticians to choose his glasses. So if you're testing an e-commerce site, make sure that your participants make the purchase decisions too.
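The funnel idea above can be sketched in a few lines of code. This is a simplified illustration, not a real screener: the question names and quota sizes are assumptions, but the ordering (cheap exclusion questions first, balance quotas last) is the point.

```python
# Screener-as-funnel sketch: run exclusion questions first so
# unsuitable candidates drop out as early as possible, then apply
# balance quotas so you recruit equal numbers in each category.
# Question names and quota sizes are illustrative assumptions.

def screen(candidate, quotas):
    """Return the quota cell the candidate fills, or None if excluded."""
    # Exclusion questions first: one wrong answer ends the call quickly.
    if candidate["works_for_competitor"]:
        return None
    if not candidate["makes_own_purchase_decisions"]:
        return None

    # Balance questions last: recruit equal numbers per cell.
    cell = candidate["internet_knowledge"]   # "high" or "low"
    if quotas.get(cell, 0) <= 0:
        return None                          # this cell is already full
    quotas[cell] -= 1
    return cell

quotas = {"high": 4, "low": 4}
candidate = {
    "works_for_competitor": False,
    "makes_own_purchase_decisions": True,
    "internet_knowledge": "high",
}
print(screen(candidate, quotas))  # prints "high" and fills one "high" slot
```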
Get value-for-money participants
If you are recruiting for a thinking aloud study, you need to screen out candidates who are shy or inarticulate. You can usually judge this by including an open question in your screener: for example, "Tell me about the way you shop for products online". But if you find that you're screening out lots of potential candidates with this requirement (an audience of teenage boys, for example), you may need to rethink your methodology. You want representative users, after all.
Also, if you are recruiting for an eye tracking study, you'll need to exclude people who wear bifocals, rimless glasses or lots of mascara. (We've yet to find a subtle way of asking that question. It's probably best to ask your participants not to wear any mascara on the day of the test).
Manage each participant's expectations
At the beginning of the screening call, make sure the candidate realises that answering the screening questions is a prerequisite for taking part in the research, not the research itself. Explain that the incentive will be paid in cash once the session is complete.
Once you have recruited participants, manage their expectations for the session. Most people's preconception of consumer research is the focus group, so if you're running a typical usability session make it clear that the participant will be interviewed alone. This is also a good time to let participants know that the session will be recorded to video and that they will be asked to sign a non-disclosure agreement. If any of these are deal-breakers, now would be a good time to find out.
At the same time, don't reveal too much in case participants decide to do their own research before the test. For example, if you tell participants that you'll be asking them to evaluate BigCo's web site, you give them the opportunity to visit the site and "practise" in advance. Provide enough information to reassure the participant about the nature of the study, but not the specifics.
Pilot test the screener
Test the screener on a handful of people you know you don't want, and on a handful you know you do want and make sure they fall into the right "bins". And critically, be sure that internal stakeholders "sign off" on the screener so that later they cannot dismiss the value of the study by saying you had the wrong participants. When a frustrated colleague observes your usability study and asks you, "Where did you get such stupid users?", you want to be sure you have a watertight response.
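The "right bins" check lends itself to a tiny automated pilot: feed the screener people you already know you want and people you know you don't, and confirm each lands where expected. The `accepts()` function below is a deliberately simplified stand-in for a real screener, and the candidate attributes are illustrative assumptions.

```python
# Pilot-test sketch: run known-accept and known-reject candidates
# through the screener and check they fall into the right "bins".
# accepts() is a simplified stand-in for a real screener.

def accepts(candidate):
    return (not candidate["works_for_competitor"]
            and candidate["shops_online"])

known_good = [{"works_for_competitor": False, "shops_online": True}]
known_bad = [
    {"works_for_competitor": True, "shops_online": True},
    {"works_for_competitor": False, "shops_online": False},
]

assert all(accepts(c) for c in known_good), "wrongly rejected a wanted candidate"
assert not any(accepts(c) for c in known_bad), "wrongly accepted an unwanted candidate"
print("screener pilot passed")
```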
Avoid no shows
Participants who fail to turn up are the bane of the researcher's life. Not only is this frustrating and a waste of time, it's very expensive and embarrassing, especially if you have a couple of senior managers in the observation room twiddling their thumbs. You need to avoid participant no-shows at all costs. Try these tips:
- Once you've recruited the participant, emphasise how important he or she is to the research. Phrases like these help: "This product has been designed especially for people like you" and "You are exactly the kind of person we need for this research".
- Send the participant a map and directions to the facility. Send a proper letter, in the post, since this makes the event appear more real and tangible. Also send an email with a link to a web page that contains the same information, just in case the participant loses the letter.
- Give participants your phone number to call if they can't find the venue or if they are running late.
- Make sure all letters and emails are sent from a named individual rather than from a faceless group, such as 'The Usability/UX/Web/IT Team'.
- Get the participant's mobile phone number so you can call him or her. On the day before the session, ring the participant to confirm everything is still OK and re-send instructions by email. On the day of the test, send a text message reminding the participant of the start time.
Following these suggestions will help, but there are no guarantees. So it's also worth getting back-up cover. Recruit "floaters": people who agree to turn up at the facility at the same time as the first participant and stay until that day's last participant has arrived. The floater's life is boring but it's well paid: we tend to pay floaters 2-4 times as much as regular participants. (Just make sure you have lots of magazines and newspapers for them to read).
For critical projects, you should also consider double-recruiting, where you recruit two participants for each slot. If both participants turn up, ask your observers to review each participant's screener and choose the person they want to take part. The other participant should remain at the facility for 15 minutes or so, just to make sure the chosen participant is up to scratch. Then he or she can be sent on his or her way with the full incentive.
Brief the recruiting company
If you use an external agency to do the recruiting, walk through the screener carefully with the recruiter and make sure there are no ambiguities. Make sure you speak with the actual recruiter: most recruitment agencies have a legion of subcontractors who actually do the leg work, and it's that person you need to speak with, not his or her manager. Explain to the recruiter why getting the wrong people is a serious problem for your study. You should also identify the questions where you can allow some flexibility and those where you can't: this will make your recruiter's life a lot easier.
User research with unrepresentative users is a waste of time and money. It makes no sense to cut corners. I hope these suggestions help you set up and manage your own participant recruitment program; if you have any additional suggestions, please let me know.
About the author
Dr. David Travis (@userfocus on Twitter) holds a BSc and a PhD in Psychology and he is a Chartered Psychologist. He has worked in the fields of human factors, usability and user experience since 1989 and has published two books on usability. David helps both large firms and start ups connect with their customers and bring business ideas to market. If you like his articles, you'll love his online user experience training course.