Jane’s story

Jane had everything in place. The lab had been booked out for the full day. She’d checked the recording software and knew that she’d have good recordings of the session to play back later. She had a developer on hand who could dial into the server if the iPad prototype crashed and needed to be restarted. She had even remembered coffee and snacks for the clients on the other side of the one-way mirror. What could go wrong?

The participant was waiting outside and Jane invited him in. Thinking back on it later, she realised this was the point where she felt things might not go to plan. When the participant saw the iPad he seemed hesitant and a little confused. She put it down to test anxiety and asked him to take a seat. She then went through her usual speech — “We’re not testing you” — and handed him the first task along with the iPad.

The participant turned over the iPad, then looked at the side of it. He paused and looked at Jane. “Errm…how do you turn it on?” he asked.

Jane adopted her best quizzical face. “I thought you owned an iPad?” she said. The screener had specifically asked this question.

“Oh no,” he continued. “I don’t actually own one personally. But I’ve used one in the Apple Store. If you just turn it on for me, I’ll be fine.”

Despite the soundproofing, Jane was sure she heard a gasp from behind the one-way mirror.

A taxonomy of difficult characters

Jane’s experience may seem a bit extreme, but it actually happened in a test I observed (though I’ve changed the names to protect the innocent). Test participants sometimes tell half-truths because they are seduced by the incentive. Or they may just be nosy and want to see what consumer research looks like from the inside. Or an over-eager recruiter may sometimes relax criteria to hit the numbers.

But how about other difficult characters? I polled some UX colleagues to find out about the kind of difficult usability test participants they’ve experienced in the past. I combined these with my own experience and classified them into the following groups:

  • Participants who should never have been recruited.
  • Participants who don’t think aloud properly.
  • Participants who don’t want to criticise the design.
  • Participants who are anxious.
  • Lost souls.

Let’s look at each of these in turn.

Participants who should never have been recruited

This group of difficult participants may:

  • Claim to be frequent web users but can barely use the internet or a mouse.
  • Fake ownership of a device (like an iPad).
  • Hold a grudge against the brand you’re testing and want to complain about the service they received in the past.
  • Be trying to make a living from consumer research (they are “professional test participants”).
  • Have strong privacy concerns and don’t want you to record their face, voice or what they do on screen.

Participants who don’t think aloud properly

This group of difficult participants may:

  • Talk too little.
  • Read aloud every word on the screen, but fail to tell you why they’re clicking a link.
  • Treat it like an interview and discuss the task with you, rather than actually doing the task.
  • Ramble, go completely off topic and refuse to return to the task until you hear them out.
  • Adopt the persona of an expert reviewer and want to show you how much they know: “I’ve never been a fan of fly-out menus as an interaction design technique”.

Participants who don’t want to criticise the design

This group of difficult participants may:

  • Clearly dislike something about the system but be too nice or too polite to tell you.
  • Think it’s a test of their ability and desperately want to give you the “right” answers.
  • Fail to complete the tasks — but then give the system top marks on a post-test survey.

Participants who are anxious

This group of difficult participants are typically:

  • Incredibly nervous.
  • In need of continual reassurance, frequently asking, “Am I doing it right?”
  • Overly cautious before simple actions, like clicking a link.

Lost souls

This is really my “miscellaneous” category. This group of difficult participants may:

  • Fail to arrive or turn up late and cause you to overrun.
  • Be uninterested and refuse to really engage with the test (two clicks and they’re done).
  • Denigrate the system you’re testing just because it’s new (they prefer an old system and don’t want it changed).
  • Have poor hygiene (“the chronically soap-shy and stinky”, as one of my colleagues so eloquently put it).

Towards a solution: thinking of the test from the participant’s perspective

Before listing some specific solutions for each of these character types, I’d like to suggest a general approach for dealing with difficult usability test participants.

Start by thinking of the test from the participant’s perspective.

Very few of the participants we experience as “difficult” woke up the morning of our test and decided to be a difficult person that day. Think of a time when you’ve been a “difficult” person: perhaps when you received an unwanted phone call from a call centre, or when you were made to wait far too long in line, or when you returned a meal at a restaurant. Someone else thought of you as a difficult person that day (and is probably searching the web for tips like these on dealing with you).

The reality is a little more complex: it’s nearly always the situation that makes people awkward. As much as we like to blame people for their behaviour, it’s rarely the case that people are awkward by nature. Here are some examples that may explain some of the behaviours I’ve listed above:

  • The participant might be anxious because of the testing situation: perhaps he or she finds the recording equipment intrusive, or worries that the test is some kind of elaborate scam.
  • Perhaps the software is very hard to use, making the participant feel stupid.
  • Maybe the tasks we’ve set the participant aren’t realistic: the participant doesn’t understand what to do.
  • Something bad may be going on in the participant’s life (perhaps the participant got a tax bill in the post before coming in).
  • The participant may have been poorly recruited and doesn’t have the necessary expertise to take part.
  • The moderator is anxious for some reason and this is being transferred to the participant.

Whether we blame the person or the situation matters. If we blame the person, then there’s little we can do to make things better: it’s just the way they are. But the situation is something we have control over. Depending on what’s causing the problem, here are some ideas that might help.

Dealing with participants who should never have been recruited

We’ll never be able to root out the chronic liars and fakers, but we can control the situation by recruiting properly and by managing participants' expectations.

During recruitment, create a screener that focuses on behaviour rather than demographics. I’ve covered this in more depth in Writing the Perfect Participant Screener, but essentially the trick is as follows. If you want to distinguish “high” from “low” technology expertise, don’t just ask people how long they spend on the internet: one person’s “frequently” is another person’s “sometimes”. Instead, ask them to self-identify with statements of behaviour that reflect low to high expertise. For example, ask, “Which of the following statements best describes your relationship with technology?” and provide options such as:

  • “I avoid using technology wherever possible and rely on other people to help me”.
  • “I use technology and am still learning how best to incorporate it into my life”.
  • “I feel comfortable with technology and feel I now know the basics”.
  • “I like technology and can mostly troubleshoot any problems that occur on my own”.
  • “Technology is a real passion of mine and people come to me for help with technical issues when they get stuck”.

Similarly, your screener should ask a question like, “When did you last take part in a focus group or a usability test?” and decline people who have taken part in the last 6 months or so.
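If your screener lives in a survey tool or spreadsheet, this qualification logic is easy to automate. Here’s a minimal sketch in Python, assuming the five expertise statements above and a six-month exclusion rule; the function names, the low/medium/high banding and the example data are my own illustrative choices, not part of any real recruiting tool.

    # A minimal sketch of the screener logic described above. The statements and
    # the six-month rule come from this article; everything else is illustrative.
    from datetime import date, timedelta
    from typing import Optional

    # Behavioural statements, ordered from lowest to highest expertise.
    EXPERTISE_STATEMENTS = [
        "I avoid using technology wherever possible and rely on other people to help me",
        "I use technology and am still learning how best to incorporate it into my life",
        "I feel comfortable with technology and feel I now know the basics",
        "I like technology and can mostly troubleshoot any problems that occur on my own",
        "Technology is a real passion of mine and people come to me for help with technical issues when they get stuck",
    ]

    def expertise_band(statement_index: int) -> str:
        """Map the chosen statement (0-4) to a technology expertise band."""
        if statement_index <= 1:
            return "low"
        if statement_index >= 3:
            return "high"
        return "medium"

    def qualifies(statement_index: int,
                  last_research_date: Optional[date],
                  wanted_band: str) -> bool:
        """Decline recent research participants, then match the expertise band."""
        if last_research_date is not None:
            if date.today() - last_research_date < timedelta(days=183):
                return False  # took part in a focus group or usability test too recently
        return expertise_band(statement_index) == wanted_band

    # Example: chose the fourth statement and last took part in research over a
    # year ago, so qualifies for a "high" expertise slot.
    print(qualifies(3, date.today() - timedelta(days=400), "high"))  # True

The useful property of encoding it this way is that the logic never changes when the study changes: you swap in new behavioural statements and a new target band, and the recency rule stays the same.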

To avoid “device fakery”, get the recruiter to tell participants to bring in the mobile device they claim to own, even if they’ll be using something else in your test. And make sure participants realise that their participation (and incentive) depends on this. Also make sure they’re told about session recordings so that your webcam doesn’t come as a surprise when they turn up. If you suspect that any of this isn’t made clear at the recruitment phase, reiterate it when you send out the joining instructions.

If you have a participant who simply wants to vent about a past poor experience, give them the opportunity to get the complaint off their chest, promise to pass it on to the appropriate person (if that’s possible) and move on.

If all else fails, stop the usability test and do something else entirely — for example, turn it into an interview instead. Use the situation as a way to find out more about this class of user. There's little point in collecting data you're not going to use.

Dealing with participants who don’t think aloud properly

Since our mantra is to control the situation, the best approach is to prevent this happening in the first place.

You can manage this by taking two deliberate steps. First, provide clear instructions for thinking aloud at the beginning of the test. For example: “When you are using the web site, I’d like you to think out loud. This means I want you to tell me what you’re thinking about as you use it. For example, I’d like you to say what it is you are trying to do, what you are looking for and any decisions you are making. If you get stuck or feel confused, I’d like to hear that too.”

And second, ask participants to practise the technique before they start the test. For example, ask them to talk through sending a text message from their phone, changing the staples in a stapler or adjusting the height of their chair. Don’t start the test until you’re sure they are thinking aloud properly (don’t collude with poor performance).

Similarly, before each task, ask participants to read the task aloud and repeat it back in their own words. This emphasises the task-orientation of the test and also ensures participants know what they’re doing.

If the problem occurs mid-way through a task, redirect the participant to the task card and ask them to read the task again. “I’d like to make the best of the limited time we have with you today, so I’m going to refocus you on the task you’re working on” is a useful phrase at this point.

Dealing with participants who don’t want to criticise the design

Few participants will feel comfortable criticising the design if they think you had a hand in designing it. So at every possible opportunity, emphasise your independence from the design team, even if this is a white lie. Make sure participants realise your role is to provide honest feedback to the design team and there’s nothing the participant can say about the product that will offend you.

A related approach is to flatter the participant: tell them that the system is aimed exactly at people like them and their first reactions are important.

If all else fails, play Devil’s advocate and invite criticism. A carefully worded phrase will effectively give them permission to be negative. For example: “You’ve given us some great feedback about this product. My job is to come back to the team with a balanced set of views, so if I had to press you for something you don’t like about this, what would it be?”

Dealing with participants who are anxious

Most people find a test situation anxiety-provoking, so it’s important to defuse it as much as we can. If the participant is especially anxious, you may be able to detect this when you meet them in reception. If that’s the case, change your introduction: perhaps chat for longer in reception and grab a quick coffee before heading over to the usability lab.

Another approach is to demystify the test situation: for example, show the participant what’s behind the curtain (or the one-way mirror) and introduce the participant to any observers. Of course, if your control room is full of monitors and looks like NASA mission control, you might want to skip this step.

A good way of removing test anxiety in the session itself is to start with an easy task, such as asking the participant to search the site or follow some instructions in help and support. You may not use the results of this task in your report, but it gives the participant a way of easing into the test and gaining confidence.

If the problem is happening on a specific task, stop the problematic task and move on. You could even consider ditching your canned tasks and asking participants to define their own tasks with the system: ask what they expect to do with the web site or product and then test out that scenario. Participants often find these kinds of usability test tasks more motivating.

If you find that the participant’s anxiety appears to be focused on the recording equipment (for example, the participant keeps glancing at the camera), then suggest turning off the recording equipment and taking written notes instead.

If all else fails, take a 10-minute break. Be authentic and talk about the elephant in the room: “You seem quite anxious. Is there anything I can do to help?”

Dealing with lost souls

If you get one of these participants, I’m not sure there’s anything you could have done ahead of time to prevent the problem. These are the hard rump of difficult participants who probably did wake up that morning and decide to become fully paid-up members of the awkward squad. But remember: these represent a small fraction of the participants you’ll encounter in a usability test; you may go your whole career without meeting one.

The only solution to participants like these is to have a “floater”: a participant who agrees to come along and stay for the day, on the off-chance that you reject a participant or have a no-show. Floaters, understandably, will require a much higher incentive. For critical tests, you might even want to double recruit: recruit two participants for each slot and then send the less suitable one away (you’ll still need to give them the incentive).

One character I chose not to mention above (because I’ve not experienced him myself) is the seriously scary lunatic you don’t want to be alone in a room with. If you turn up at reception and find this kind of character staring at you, then my suggestion is to say, “Actually we have all the data we need. Thanks for coming in, here’s your incentive, goodbye,” and then quickly make your exit.

Conclusion

Most difficult usability test participants are really reacting to a difficult situation that we have put them in. By planning carefully when you set up the study and by defusing potential problems during the test, you’ll find that most of your participants are problem-free.

Acknowledgements

Thanks to the following people who shared their usability testing experience with me: @anne339, @Ben_Dawson_, @david_z, @devan_, @domiikka, @duzkiez, @e_fficiency, @Formulate, @jochenWolters, @jockbu, @Lisa_Crosby, @malross, @mattycurry, @milesperg, @TomvB, @userpalooza, @worldobyrne and @zbrukas.

About the author

David Travis

Dr. David Travis (@userfocus) has been carrying out ethnographic field research and running product usability tests since 1989. He has published three books on user experience including Think Like a UX Researcher. If you like his articles, you might enjoy his free online user experience course.


