There are many findings in psychology that are of general interest to user researchers. But here are 10 that have particularly influenced the way I go about my day-to-day work.
Plan research to avoid bias
There are many obvious sources of bias when carrying out user research. For example, most people are aware of the need to avoid leading questions and to avoid 'selling' participants on the design. But psychology also teaches us that there are many subtle ways bias can creep into a study. This might take the form of writing down what you thought the participant meant rather than what she actually said. Or, during a usability test, saying "Good" or "That's right" when the participant uses the system in a particular way. Or failing to randomise the order of the tasks you ask people to carry out, to control for the fact that participants approach later tasks in a usability test with more knowledge of the system than they had at the start.
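Randomising task order is easy to build into a test plan. A minimal sketch (the task names are hypothetical): each participant gets an independently shuffled order, so practice effects average out across the group.

```python
import random

TASKS = ["find a product", "add it to the basket", "check out", "contact support"]

def randomised_order(tasks, seed=None):
    """Return a shuffled copy of the task list, leaving the original intact."""
    rng = random.Random(seed)  # a seed is used only so an order can be reproduced later
    order = list(tasks)
    rng.shuffle(order)
    return order

# Give each participant their own task order:
for participant_id in range(1, 4):
    print(participant_id, randomised_order(TASKS, seed=participant_id))
```

For small samples, a Latin square design gives more even coverage of orders than random shuffling, but shuffling is usually good enough in practice.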
Psychologists have been wrestling with the issue of experimental design for some time, and every user researcher needs a basic grounding in what to do and what to avoid. For more detail, read Philip Hodgson's article on controlling experimenter effects, and for even more detail, try Critical Thinking About Research: Psychology and Related Fields by Julian Meltzoff.
Be wary of using expert judgement as your only source of data
Expert reviews are a great way to find usability bloopers in your product. We have even created a 247-item checklist to help guide an expert review of web sites. However, expert judgement should never be your only source of data. Psychological studies frequently show the limitations of expert judgement. For example, research shows that political pundits are barely better than a coin flip at predicting the result of an election, and that experienced wine tasters can struggle to tell whether they are smelling a red or a white wine. Part of the problem is that, in the same way that most people think they are better than the average driver, most experts believe they are better than the average expert. As a consequence, they are lulled into a false sense of security about their own ability. So by all means apply your expertise to make judgements about a design, but be sure to validate these judgements with data.
There aren't many laws in psychology, but here's one you should know
It's not always easy to be conclusive in user research, but thanks to psychology there are at least two common questions asked of user researchers where we can make predictions with some degree of certainty.
The first question is about the number of choices in a user interface. Next time you are faced with a cluttered UI design that offers so many choices it looks like the menu in a Chinese restaurant, you should invoke Hick's Law (named after psychologist William Edmund Hick). This states that the time taken to make a decision increases logarithmically with the number of choices: the more alternatives a user faces, the longer it will take to make a decision and select the correct one.
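Hick's Law is commonly written as T = a + b · log₂(n + 1), where n is the number of equally probable choices. A minimal sketch in Python; the constants a and b here are illustrative, not fitted values, and in any real study they would have to be estimated from observed data:

```python
import math

def hick_decision_time(n_choices, a=0.2, b=0.15):
    """Hick's Law: T = a + b * log2(n + 1).

    a and b are illustrative constants in seconds, not fitted values.
    The '+ 1' accounts for the decision of whether to respond at all.
    """
    return a + b * math.log2(n_choices + 1)

# Decision time grows with every extra choice, but only logarithmically:
for n in (4, 8, 16, 32):
    print(n, round(hick_decision_time(n), 2))
```

The logarithm is the interesting part: it assumes users can subdivide an organised set of options, which is one reason well-grouped menus hurt less than the raw item count suggests.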
And here's another one
The second question is about the size and location of controls. According to Fitts' Law (named after the psychologist Paul M. Fitts), the time required to rapidly move to a target is a function of the distance to the target and its size. This applies both to desktop user interfaces (stepper controls, anyone?) and to touch screen interfaces that sometimes feel like they have been designed for people with talons for fingers. In practice, Fitts' Law doesn't mean you should simply make big buttons but that you should increase the clickable area. For example, allow a user to click on the text label next to a form field to place the cursor in the field for data entry.
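In its widely used Shannon formulation, Fitts' Law is written T = a + b · log₂(D/W + 1), where D is the distance to the target and W its width. A hedged sketch, again with illustrative rather than fitted constants:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Fitts' Law (Shannon formulation): T = a + b * log2(D/W + 1).

    distance and width must be in the same units; a and b are
    illustrative constants, not empirically fitted values.
    """
    return a + b * math.log2(distance / width + 1)

# Enlarging the clickable area (e.g. making a form label clickable)
# reduces predicted movement time for the same travel distance:
print(round(fitts_movement_time(distance=400, width=20), 2))  # small target
print(round(fitts_movement_time(distance=400, width=80), 2))  # larger target
```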
We often don’t know why we do things
We like to think that our decisions are rational and made after conscious deliberation. That's why it's so tempting to believe participants know why they behave in a particular way. But people are poor at introspecting into the reasons for their behaviour.
One of my favourite research studies demonstrating this was carried out over forty years ago by psychologists Richard Nisbett and Timothy Wilson. The researchers set up a table outside a store with a sign that read, "Consumer Evaluation Survey: Which is the best quality?" On the table were four pairs of ladies' stockings, labelled A, B, C and D from left to right. More people preferred D (40%) than any other pair, and fewest (12%) preferred A.
In fact, all the pairs of stockings were identical. The reason most people preferred D was simply a position effect: the researchers knew that people show a marked preference for items on the right side of a display (another finding from psychology). But when the researchers asked people why they preferred the stockings that they chose, people identified an attribute of their preferred pair, such as its superior knit, sheerness or elasticity. The researchers even asked people if they may have been influenced by the order of the items, but with just one exception (a psychology student who had just learnt about order effects) nobody thought this had affected their choice. Instead, people made up plausible reasons for their choice.
To read about hundreds of other studies in psychology that tell the same story, read Strangers to Ourselves: Discovering the Adaptive Unconscious by Timothy D Wilson.
In practice, this means that user interviews should never be your only source of data. Good user researchers don't just ask: they watch. You want to observe people because what people say does not always match what they do.
Make sure you know what that eye tracker is measuring
Eye tracking is compelling technology and makes a user researcher feel like a proper scientist. It can be invaluable to answer highly specific questions about a design (such as whether your form label should be to the left or above a form field). But with user interface evaluation, there is very little that you'll discover from eye tracking that you can't discover more easily (and much more cheaply) by observing your users' behaviour.
There are at least two issues with eye tracking that have been known in psychology for many years but still appear to be news to user researchers. The first is that eye tracking measures where the eye is pointing, which is not necessarily where attention is focussed. My wife calls this Male Pattern Fridge Blindness when I'm unable to find the jam in the fridge even though I'm staring directly at it. The point is that where we look does not always match what we are attending to.
The second finding is that the way we scan a scene depends on the task we ask people to do. Russian psychologist Alfred Yarbus first noted this as long ago as 1967. The image below shows the different gaze paths of a person asked to do seven different tasks with the same picture. Compare the gaze path in task 3 (where the participant is trying to estimate the ages of the people in the picture) with task 6 (where people are trying to memorise the position of objects in the picture). This means you can't simply present a gaze path as evidence of 'what stands out' to your users.
Images from an eye tracking study carried out by Alfred Yarbus. Image from Wikipedia.
A fine book on this is Eye Tracking the User Experience by Aga Bojko. Although the author doesn't discuss the Yarbus study, she does do a great job of describing when and where eye tracking is useful.
We get tunnel vision when given specific tasks
In the previous section, I pointed out that psychologists have shown we can look at one thing while our attention is elsewhere. Here's a related, but contrasting, finding: we can get so focused on a task that we miss the obvious just outside our area of attention. If you've not tried the selective attention test by psychologist Daniel Simons, search for it on YouTube and try it before reading on.
This effect is known as inattentional blindness and it's one of the reasons people miss things in a user interface when they are carrying out specific tasks. It's a dramatic demonstration of how attention shapes our perception.
This has huge implications for user research. For example, when moderating a usability test it's very important not to begin by giving your participant a tour of the user interface. If part of the interface goes unseen when people are doing a task, you need to discover that as part of your testing. Priming people on the features and functions in the system means you'll fail to discover what they miss.
Our memory is fallible
A survey in 2011 showed that 63% of Americans believed that our memory is a bit like a video camera. Presumably, these people thought of our brain as laying down a continual record of the world. People with good memories have easy access to this record and people with bad memories struggle to replay it.
In fact, our memory is both selective and fallible. Psychologist Elizabeth Loftus proved this with her 'Lost in The Mall' studies, where she showed that people could be made to believe and then embellish a totally false but plausible event (getting lost in a shopping mall as a child).
In practice, this means that asking people about their past behaviour could result in data that contains 'confabulation': made-up stories that people believe are true but that never happened. People have a need to tell good stories, so they will invent what happened at certain points rather than leave gaps in their narrative. This is almost impossible to protect against when running user interviews and is a further argument for why you should complement interviews with observation.
You can't analyse qualitative data with a spreadsheet
Most people know the basics of dealing with quantitative data. Give them a few columns of numbers, and they know how to use a spreadsheet package to calculate the mean and the standard deviation. This works well for summative usability test data, where we have measures of time on task and completion rate, and for survey data. But analysing qualitative data from a field visit, or analysing usability problems from a usability test, requires a different approach. This is because qualitative research data cannot be meaningfully expressed numerically. For example, what does it mean to describe the 'average experience' of different people using a Government service to apply for a driving licence? Or how would you analyse usability test data to discover the 'average' usability problem?
Qualitative research is about generating deep insights; it's not about making predictions about the population as a whole. Fortunately, psychologists have been dealing with these kinds of data for many years. The basic approach is as follows:
- Get familiar with your data. Listen to the recordings. Read through the transcripts.
- Code your observations. Start with significant findings. Highlight stories, behaviours, pain points, needs and goals. (User researchers often use sticky notes for this step, one sticky note per observation).
- Look for patterns. Starting with the findings that you find most insightful or unusual, cluster similar findings from different participants.
- Look for insights. Dig behind your clusters. A good insight is intuitive and non-obvious at the same time.
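The coding and clustering steps above can be sketched in a few lines of Python. The participants, observations and codes here are entirely hypothetical; the point is simply that clustering means grouping recurring codes across participants and seeing which patterns are widely shared.

```python
from collections import defaultdict

# Hypothetical coded observations: (participant, code) pairs from a field study.
observations = [
    ("P1", "confused by jargon"), ("P1", "uses search first"),
    ("P2", "uses search first"),  ("P2", "prints the page"),
    ("P3", "confused by jargon"), ("P3", "uses search first"),
]

# Cluster similar findings: which codes recur, and across which participants?
by_code = defaultdict(set)
for participant, code in observations:
    by_code[code].add(participant)

# Report clusters, most widely shared first:
for code, participants in sorted(by_code.items(), key=lambda kv: -len(kv[1])):
    print(f"{code}: {len(participants)} participants ({', '.join(sorted(participants))})")
```

In practice this tallying is done with sticky notes on a wall; the value of writing it down this way is only to make explicit what 'looking for patterns' means.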
Avoid p-hacking when analysing quantitative data
A recent finding in psychology is that a surprising number of researchers don't do their statistics properly. The full story is complex, but the practitioner takeaway is simple: you can't cherry-pick your results.
For example, imagine that you gave the System Usability Scale to participants at the end of a remote, unmoderated usability test where you are comparing two different web-based prototypes. 100 people use version 1 of the prototype and 100 different people use version 2.
The correct way of analysing these data is to calculate a single SUS score for each participant, and then carry out a statistical test where you compare the 100 participant scores for version 1 against the 100 participant scores for version 2.
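For reference, a single SUS score is computed by rescaling each of the ten responses (given on a 1–5 scale) and multiplying the sum by 2.5, giving a score out of 100: odd-numbered items contribute (response − 1) and even-numbered items contribute (5 − response). A minimal sketch, using one hypothetical participant's responses:

```python
def sus_score(responses):
    """Convert ten SUS item responses (each 1-5) into a 0-100 score.

    Odd-numbered items contribute (response - 1); even-numbered items
    contribute (5 - response); the sum is multiplied by 2.5.
    """
    assert len(responses) == 10, "the SUS has exactly ten items"
    total = 0
    for item, response in enumerate(responses, start=1):
        total += (response - 1) if item % 2 == 1 else (5 - response)
    return total * 2.5

# One hypothetical participant's answers to the ten SUS statements:
print(sus_score([4, 2, 4, 1, 5, 2, 4, 1, 4, 2]))  # → 82.5
```

With one score per participant, the two groups of 100 scores can then be compared with a single statistical test.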
Another way of analysing the data, which is deeply flawed, would be to analyse each question separately across your 100 participants. For example, you could look for a statistical difference on Q1, then Q2 and so on. Because the SUS has 10 questions, this gives you 10 chances to find a statistical difference (rather than the one chance you have with a single SUS score). In fact, using this approach with a conventional significance level of 0.05, you would expect at least one of the 10 questions to show a spurious 'significant' difference around 40% of the time, even when there is no real difference between the prototypes. This approach is known as 'p-hacking' because you are trawling through your data looking for statistical differences that you will then justify after the fact.
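To see how much this inflates the false-positive rate, the sketch below simulates the per-question analysis under the null hypothesis (no real difference between the prototypes): each of the 10 tests has a 5% chance of a spurious 'significant' result, yet the chance of at least one across all 10 is 1 − (1 − 0.05)¹⁰ ≈ 0.40.

```python
import random

random.seed(42)
ALPHA = 0.05          # per-test significance level
N_QUESTIONS = 10      # the SUS has ten items
N_SIMULATIONS = 20_000

# Under the null hypothesis, each per-question test still comes out
# 'significant' with probability ALPHA purely by chance.
at_least_one_hit = 0
for _ in range(N_SIMULATIONS):
    false_positives = sum(random.random() < ALPHA for _ in range(N_QUESTIONS))
    if false_positives > 0:
        at_least_one_hit += 1

print(f"Analytic family-wise error rate: {1 - (1 - ALPHA) ** N_QUESTIONS:.2f}")
print(f"Simulated:                       {at_least_one_hit / N_SIMULATIONS:.2f}")
```

This is exactly why a single pre-planned comparison (one overall SUS score per participant) is the right analysis, and why post-hoc trawling isn't.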
Did I miss your favourite psychological finding that user researchers should know about? If so, let me know in the comments.
About the author
Dr. David Travis (@userfocus on Twitter) is a User Experience Strategist. He has worked in the fields of human factors, usability and user experience since 1989 and has published two books on usability. David helps both large firms and start ups connect with their customers and bring business ideas to market. If you like his articles, why not join the thousands of other people taking his free online user experience course?