Problems with asking “Why?”
When running a usability test, it's very tempting to ask participants why they behaved in a particular way. We are naturally curious. "Why did you choose that option?", "Why don't you try searching?", "Why did you pick the 'About Us' link rather than the 'Contact Us' link?"
The assumption behind these kinds of questions is that people can introspect into the reasons behind their behaviour. We think that the participant knows why he chose the 'About Us' link, and all we need to do is help the participant elicit the reason.
Nisbett and Wilson's classic study
In fact, there is a whole raft of evidence showing that people are very poor at introspection. In a classic study carried out thirty years ago, Richard Nisbett and Timothy Wilson ran some research outside a bargain store in Ann Arbor, Michigan. The researchers set up a table outside the store with a sign that read, "Consumer Evaluation Survey — Which is the best quality?" On the table were four pairs of ladies' stockings, labelled A, B, C and D from left to right. Most people (40%) preferred D, and the fewest (12%) preferred A.
In fact, all the pairs of stockings were identical. The reason most people preferred D was simply a position effect: the researchers knew that people show a marked preference for items on the right side of a display. But when the researchers asked people why they preferred the stockings that they chose, people identified an attribute of their preferred pair, such as its superior knit, sheerness or elasticity. The researchers even asked people if they might have been influenced by the order of the items, but with just one exception (a psychology student who had just learnt about order effects) nobody thought this had affected their choice. Instead, people confabulated: they made up plausible reasons for their choice.
Confabulating about beauty
In a related study published in Science two years ago, Petter Johansson and his colleagues showed a similar finding. In this study, an experimenter showed a participant two pictures of different women, and asked the participant to point to the most attractive. If you were a participant in this study, you would have seen the experimenter hand you your chosen picture, discard the other photo, and then ask you to justify your choice.
Unknown to participants, the experimenter was a part-time magician, and using a sleight of hand technique he was really showing the participant the picture of the woman rated less attractive. He now asked participants why they had chosen that picture.
Remarkably, even when the photos weren't that similar, the majority of participants (73%) didn't spot that they were now looking at the woman they thought was less attractive. Even more curiously, participants now provided "explanations" for their choice. So for example they might say, "Well, I chose this picture because I like blondes", even though the participant had really chosen a brunette (whose picture was now face down on the table). Just as with the fake consumer evaluation survey, people confabulated: they made up reasons to justify their choice.
What this means for usability testing
Let's now interpret these findings in the light of a usability test. When we ask a participant, "Why did you choose that option?", the participant will introspect and provide an answer. But the participant may not have conscious access to the reason for his or her choice. And as these studies show, participants are unlikely to say, "I don't know". They will provide an explanation for their behaviour, an explanation that they may really believe to be true but which is effectively made up.
This isn't the best basis on which to make product design decisions.
Research like this shows that we shouldn't be asking "Why" in usability tests. Instead, your usability test should focus on the definition of usability in the international standard ISO 9241-11:
“Extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.”
For example, this might mean:
- Can people complete the task?
- How long do they take?
- How many errors do they make on the way?
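Measures like these can be summarised straightforwardly once you have logged each participant's outcome. The sketch below is a minimal illustration, not any standard tool: the session records, field layout and function name are all hypothetical, and a real study would also report confidence intervals rather than bare means.

```python
# Minimal sketch of summarising behavioural measures from a usability test.
# All session data below is hypothetical example data.

from statistics import mean

# Each record: (participant_id, completed_task, seconds_on_task, error_count)
sessions = [
    ("P1", True, 95, 1),
    ("P2", True, 120, 0),
    ("P3", False, 180, 4),
    ("P4", True, 75, 2),
    ("P5", False, 200, 3),
]

def summarise(records):
    """Return effectiveness, efficiency and error measures for one task."""
    completed = [r for r in records if r[1]]
    return {
        # Effectiveness: proportion of participants who finished the task
        "completion_rate": len(completed) / len(records),
        # Efficiency: mean time on task, successful attempts only
        "mean_time_secs": mean(r[2] for r in completed),
        # Errors: mean number of errors per participant
        "mean_errors": mean(r[3] for r in records),
    }

print(summarise(sessions))
```

With this toy data, three of five participants complete the task, so the completion rate is 0.6 — a behavioural fact that needs no "Why?" question to establish.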
With its focus on what people do, rather than what people say, usability testing has a unique place amongst consumer research methods. When you ask participants "Why", you are diluting the power of your usability test and you could end up changing your design for all the wrong reasons.
Listen to participant comments by all means — they can make good highlights videos — but remember to support them with behavioural data.
Nisbett, R.E. and Wilson, T.D. (1977). "Telling more than we can know: Verbal reports on mental processes". Psychological Review, Vol. 84, pp. 231-259.
Johansson, P., Hall, L., Sikström, S. and Olsson, A. (2005). "Failure to Detect Mismatches Between Intention and Outcome in a Simple Decision Task". Science, Vol. 310, No. 5745, pp. 116-119.
About the author
Anna-Gret Higgins holds a BSc in Psychology and a PhD in Counselling Psychology. She is a Chartered Psychologist and an Associate Fellow of the British Psychological Society. Anna-Gret manages the usability testing team at Userfocus and has logged hundreds of hours in usability tests of public and private sector web sites.