Moderating a usability test is full of bear traps. The moderator may fail to set expectations (by reviewing the purpose of the test and describing the moderator’s role), forget to reassure the participant (“We’re not testing you”), or fail to check for understanding (by asking the participant to repeat the task in his or her own words). Other common mistakes include asking leading or biased questions, quizzing participants on how they would design the interface, and soliciting opinions rather than observing behaviour.
But there are four mistakes that I see usability test moderators make frequently and that trump all of these. They are:
- Talking too much
- Explaining the design
- Answering questions
- Interviewing rather than testing
Talking too much
When moderating a usability test, you need to fight against the tendency to talk too much. This can happen in two places: at the beginning of the test and during the session itself.
It’s true that you need to provide an introduction to the session to put the participant at ease, and you also need to explain the kind of feedback you want from the thinking-aloud technique. But you shouldn’t go overboard in your introduction: five minutes or so is usually enough.
Usability testing is about observing participants while they carry out realistic tasks. This means the golden rule is to shut up. Although moderators tell me they know this, I still see many of them (even some experienced ones) failing to practise it. In the white heat of the test session, they can’t stop themselves from filling the silence. It’s like the old chestnut about why you need one man and his dog to moderate a usability test. The man is there to feed the dog and the dog is there to bite the man if he starts talking.
This happens partly because we’re not comfortable with silence and partly because there’s a misconception that if the participant isn’t speaking, then you’re not learning anything. Because you’re interested in participant behaviour, it’s fine to have periods of silence. Of course you want participants to think aloud — but at the same time, you need to allow participants space to read, make judgements and generally think about what they are doing.
You can avoid this trap by learning to embrace the silence. Ask participants to do the task. Then shut up, observe and listen to what they say. If you feel the urge to speak, use a phrase like, “Tell me more about that”. If you force yourself to use the same stock phrase, and none other, it will help you stay silent (you’ll sound stupid if you use it incessantly to fill the silence) — and you won’t do too much damage because you’ll encourage the participant to talk.
Explaining the design
If you ever find yourself saying to a test participant, “What the developers are trying to do here is…”, or “The reason they designed it this way is because…” or “What you don’t understand is…”, then you should slap yourself. When you explain the design of your system to a test participant, it causes two problems.
First, you’re no longer able to find out how someone will really behave when they first encounter the design. This is because you’ve given the participant some background information that real users probably won’t have.
And second, even if you were never involved in the design of the system, you affiliate yourself with it. Because what the participant hears isn’t an ‘explanation’ of the system but a defence of the system. This prevents you from being seen as a neutral observer and makes it more likely that participants will self-censor their comments.
The point where this problem occurs most frequently is during the test tasks themselves. The participant may use the system the ‘wrong’ way and the moderator feels the need to explain how to use it ‘properly’. Or the participant may be critical of something in the interface, and the moderator feels the urge to defend the design with a phrase like, “The design team thought about doing it that way, but…” Or the participant may completely misunderstand something in the interface, at which point the moderator will want to correct the participant’s misunderstanding. In particularly bad situations, this moderating style risks turning the usability test into a coaching session, or even an argument.
Believe me, no moderator ever won an argument with a test participant.
If you ever feel the urge to explain the interface or use a phrase like, “Yes, but…”, then instead say, “Tell me what you’re doing right now”. You’ll then get behind the behaviour without influencing it too much. If you really, really want to explain how to use the system or correct any misconceptions, then wait until the end of the session, once participants have tried it without your help.
Answering questions
Here’s another trap I see moderators walk into. It’s like watching a slow-motion replay of a dog chasing a stick over a cliff. The participant sets the trap and the moderator stumbles into it.
Like most traps, it seems fairly innocuous. The participant simply asks a question.
Participant questions are like gold dust. You want participants to ask questions because this indicates they are experiencing a problem with the system: they’re not sure how to proceed, so they ask you.
Gold dust, but not gold.
You find the gold by observing how the participant answers their question: what do they do to solve the problem? Do they find it easy to fix or do they consistently take the wrong path? It’s their behaviour that helps you distinguish a low priority problem from a critical one. This means the route to the gold is to refuse to answer the question.
But to any normal human being, refusing to answer a question is alien. From childhood, we’re conditioned to think that ignoring a question makes us appear either rude or stupid. That’s why so many test moderators walk blindly into the trap of answering participants’ questions.
Here’s the way to fix this in your own practice. First, in your preamble, tell participants you want them to ask questions but you won’t answer, because you want the session to be realistic. This then gives you permission not to answer any questions you’re posed.
Then, when the inevitable question comes at you during the session, use the “boomerang” technique: answer the question with a question. So, if the participant asks, “How do I get back to the beginning?”, you respond: “How do you think you get back to the beginning?” If the participant asks, “Whereabouts is the registration form?”, you reply: “Where would you look for it?” Of course, if during a session the participant asks, “Whereabouts is the bathroom?”, you don’t say, “Where do you think you’d find the bathroom?” — but other than that, answer a question with a question.
Interviewing rather than testing
If you’ve invested time in getting participants to attend your session, it makes sense to get as much out of them as possible. So you should certainly run a pre-test interview with participants before they start the test tasks to find out more about them and their relevant goals. But while the participant carries out the test tasks — which should represent the bulk of their time in a usability test — you’re an observer.
Here’s a common situation that causes a usability test to degrade into an interview: when the design team don’t know much about users. The team may not have done any field research in the past and want to milk this session for all it’s worth. This shows itself when the participant is interrupted mid-task and asked questions about the way they do this task at home. Or when the marketing lead asks you to shoe-horn in a shopping list of questions during a task. As a consequence, the research falls between two stools: it’s neither a field visit nor a usability test.
Another situation where this can happen is when you have a particularly loquacious participant who wants to engage the moderator in conversation, rather than do the tasks. The participant will continue to look over to the moderator for reassurance and try to make eye contact.
The best approach is to prevent this problem from happening in the first place. Adjust your body language to be more of an observer than an interviewer. Position yourself so you are behind and to one side of the participant. If you sense the participant looking towards you, pretend to take notes and decline the offer of eye contact.
Also make it clear to the design team that you’ll run a post-test interview to get an overall assessment and encourage comments regarding topics not raised during the session, and that’s where you’ll cover their shopping list of questions.
How to continuously improve as a test moderator
These mistakes almost always occur in novice test moderators as they earn their spurs. But even experienced test moderators make these kinds of mistake during a usability test. The best way to avoid mistakes is to continuously reflect on your own moderating skills. After each usability test, look back over the recordings, especially sessions that you feel went particularly well or badly. Make it part of your personal development to identify three things you can build on or that you could have done better.
About the author
Dr. David Travis (@userfocus on Twitter) is a User Experience Strategist. He has worked in the fields of human factors, usability and user experience since 1989 and has published two books on usability. David helps both large firms and start-ups connect with their customers and bring business ideas to market. If you like his articles, why not join the thousands of other people taking his free online user experience course?