Get hands-on practice in all the key areas of UX and prepare for the BCS Foundation Certificate.
When I run training courses in user research, I get a host of questions, ranging from "What's the difference between a field visit and a usability test?" to "How do you analyse the data?" Here are my answers to seven of the most common questions.
A usability test focuses on how people do specific tasks and the problems they experience when using a particular system. If you go to someone’s site to run a usability test, that doesn’t mean that you’re doing contextual inquiry. You’re doing an on-site usability test.
In contrast, a contextual inquiry focuses on the big picture: how people do their job, the workflow across multiple channels, their behaviours, needs, goals and pain points.
With a field visit you are trying to understand the user experience — this is bigger than the usability of a particular system. You might also carry out a contextual inquiry when there is no system in place yet — for a new kind of service. So you couldn’t usability test it even if you wanted to.
Whenever you do qualitative, small sample research, there’s always a risk that your sample will not be representative. So you have to take certain precautions to minimise the risk.
As a first step, speak with people in your organisation who already have a rough idea of the user base (even very rough is OK) and try to identify the factors that you want to balance in your interviews. For example, for a mobile app: will it be used on Android and iOS or will it be used mainly on iOS? If they use both platforms, you'll need to make sure you balance those groups in the research. You should also attempt to balance other factors that people could use to question the validity of the research (for example, aim to get a mix of gender and different age groups).
But note that there will always be edge cases: such as the person (say) who will use your app on a jailbroken iPhone. The aim is not to get 100% coverage. You simply want to identify the main user groups, who probably represent 80% of the user base.
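To make this balancing step concrete, here's a small Python sketch. The factor names and values are purely illustrative (not from any real study); it simply lists which combinations of recruitment factors your sample hasn't covered yet, so you know who to recruit next:

```python
from itertools import product

# Hypothetical recruitment factors to balance (names are illustrative).
factors = {
    "platform": ["iOS", "Android"],
    "age_band": ["18-34", "35-54", "55+"],
}

# Participants recruited so far: each is a dict of factor -> value.
participants = [
    {"platform": "iOS", "age_band": "18-34"},
    {"platform": "iOS", "age_band": "35-54"},
    {"platform": "Android", "age_band": "55+"},
]

def quota_gaps(factors, participants):
    """Return the factor combinations not yet covered by any participant."""
    covered = {tuple(p[f] for f in factors) for p in participants}
    all_cells = set(product(*factors.values()))
    return sorted(all_cells - covered)

print(quota_gaps(factors, participants))
# -> [('Android', '18-34'), ('Android', '35-54'), ('iOS', '55+')]
```

Remember the point made above: the aim is coverage of the main user groups, not every cell of an ever-finer grid.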
Whenever I work on a project where I get resistance to the idea of small sample research, I offer to validate my findings with a large-scale survey. So I’ll use the small sample contextual inquiry to find the key user groups and then I’ll use a large sample online survey to test out the findings. Sadly, you can’t start off with the large scale survey because you don’t know what questions to ask until you’ve done the field research!
Getting lost in the conversation isn’t always a bad thing. Remember these are qualitative interviews, so it’s OK to go off-piste occasionally. But one technique I’ve seen used is where interviewers keep a kind of visual mind map poster on their lap. This contains the focus question in the centre with possible exploration areas radiating from it. A quick glance at your mind map reminds you of your focus.
If you have nothing developed yet, you simply ask, “How do people satisfy this need at the moment?”
For example, let’s say your company makes leather shoes, but in the past you’ve only ever sold to retailers. Your job now is to find out how customers currently shop for these types of shoe. You might go into high street shoe shops and observe people. You might go into people’s homes and see how they shop for shoes and other products online. You’ll want to understand how shoe buying fits in with their other activities: is it a once-a-year activity or are they obsessed with shoes, like Imelda Marcos? Do they buy shoes only for themselves or as a gift? What are the pain points of buying online (e.g. “I’m worried that they won’t fit and then I’ll have the hassle of returning them”)? This then gives you valuable design insights (e.g. you may want to offer a returns service where you go to the house and collect the shoes).
Maybe it would help if I gave an example.
Imagine that I'm designing a new app that allows people to print postage labels for their parcels.
I've not done any user research yet. It was simply an idea I came up with when I visited the Post Office last week to send a parcel to a family member. The queues drove me mad. I had to wait 20 minutes to send the parcel. Surely there's a better way?
However, I need to validate this idea. I need to discover if people really have a need to bypass the Post Office queues by printing their own labels, or is this just a problem for me? If it's a real need, they are probably already doing it (maybe getting someone else to drop off the parcel for them, or maybe waiting until the end of the week when they have all of their parcels together so they do one trip to the Post Office). So let's go out and find these people and see what they really do.
Let's say we interview people in the queues at Post Offices to see if this is a need. Our research might show that most people don’t have a need for this as they send so few parcels. But we came across one user who loved the idea: an eBay seller. Mmm. Let's see if this is a trend. Now we go out to interview only eBay sellers to see if it's a proper need.
It is! They want to buy it now! Yay! Assumption proved!
However, they don't have a set of parcel scales at home. If they can't weigh the parcel, they won't know how much postage to put on it.
What if we give people a set of scales? Now we have another assumption: will eBay sellers sign up for a service that requires them to give enough personal details for us to ship them a set of scales? Another assumption to test.
And so on… Identifying user needs is the same. Whenever you feel an assumption coming on, go out and test it.
I’d be drummed out of the usability club if I didn’t admit that some people who work in the field say that it is OK to create these “assumption personas”. These personas are not based on any research but on the beliefs and preconceptions that the design team have of their users. In their book, “The Persona Lifecycle”, John Pruitt and Tamara Adlin discuss assumption personas in depth. But even they conclude that “Building personas from assumptions is good; building personas from data is much, much better.”
As I see it, there are risks and benefits with this approach.
One risk is that you end up with a stereotype, rather than an archetype. The team gets even more blinkered in its thinking.
Another is that the design team loses the enthusiasm to develop research-based personas because they feel they already have them, especially since real personas take time and money to develop.
On the other hand…
One benefit is that preconceptions exist. Assumption personas get these beliefs out in the open where they can be questioned.
Assumption personas will also highlight conflicting assumptions and so may act as a driver for field research.
But when I work for a client that suggests building assumption personas rather than collecting real data, an alarm bell goes off. What this tells me is that for one reason or another it’s not going to be easy to get to their users. This could be because the users are genuinely hard to find (e.g. brain surgeons who live on another continent) but it’s more likely that the organisation is simply not user centred. So assumption personas become a way for the design team to pretend to involve users — when in fact they are creating them in a way that avoids user contact.
My suggestion is to create the assumption personas to flush out the preconceptions that your development team have. But then I’d insist (as a basis for working with them) that they test these assumptions with user research.
I’d like to say there was a scientific method to this, but frankly there’s not (at least, not one I know of). It’s more of a process.
Let me give you an example. After I’ve carried out a handful of contextual inquiries with interviewees A, B, C, D and E, I’ll ask, “In what ways is interviewee A similar to interviewee B? In what ways are they different?” I’ll then do the same for A & C, A & D, and A & E. Then I’ll do the same for the pairwise combinations for interviewees B, C, D and E.
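For five interviewees that comes to ten pairwise comparisons, which you can enumerate mechanically rather than trying to hold them in your head. A trivial Python sketch (the interviewee labels are just placeholders):

```python
from itertools import combinations

interviewees = ["A", "B", "C", "D", "E"]

# Every unordered pair to compare: A & B, A & C, ..., D & E.
pairs = list(combinations(interviewees, 2))

print(len(pairs))  # -> 10 pairwise comparisons for 5 interviewees
for a, b in pairs:
    print(f"In what ways is {a} similar to {b}? In what ways are they different?")
```

Note that the number of pairs grows quickly (n(n-1)/2), which is one practical reason to start this comparison work after only a handful of interviews.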
At this point, I’ll often change the way I’m doing the contextual inquiries to make sure that I’m fully exploring these themes with subsequent interviewees. In many ways, these early interviews help me work out what to look for in the later interviews.
Finally, after all of the interviews are completed, I’ll start grading people along these dimensions. This is also part of the analysis, since at this point I may find that some dimensions are so strongly correlated with others that they add no descriptive value. I’ll then simplify the analysis by dropping those dimensions entirely. For example, I may find that a dimension like ‘expert / novice use’ is perfectly correlated with a dimension like ‘comfortable with technology / tech averse’.
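If you record the gradings numerically (say, on a 1–5 scale), a quick correlation check can flag redundant dimensions. Here's a minimal sketch with made-up gradings for five interviewees; the dimension names echo the example above but the numbers are invented for illustration:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-5 gradings of five interviewees on three dimensions.
expert_use = [5, 4, 2, 1, 3]
tech_comfort = [5, 4, 2, 1, 3]   # moves in lockstep with expert_use
multichannel = [1, 5, 2, 4, 3]

# |r| close to 1 means the second dimension adds no descriptive value.
print(round(pearson(expert_use, tech_comfort), 2))  # -> 1.0: drop one
print(round(pearson(expert_use, multichannel), 2))  # -> -0.3: keep both
```

With samples this small the numbers are only a prompt for judgement, not a statistical test; the point is simply to notice when two dimensions are telling you the same story.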
Dr. David Travis (@userfocus) has been carrying out ethnographic field research and running product usability tests since 1989. He has published three books on user experience including Think Like a UX Researcher. If you like his articles, you might enjoy his free online user experience course.
copyright © Userfocus 2020.