I was meeting with a potential client a few weeks back who wanted a usability test. “Tell me about your users,” I asked, hoping that I could then use this information as the basis for a recruitment screener.
“Well, it’s aimed at everyone really, so you don’t need to recruit any special kind of user,” came the reply.
Red flag. I was just formulating a response when he said:
“Anyway, that’s good news for you as you’ll need quite a few users!”
Putting on my best ‘actually-you’ll-be-surprised-by-this’ face, I said, “Often, we find we can get lots of insight with as few as five people.”
He laughed. “Five!” he said. “Our marketing department use hundreds. How do you seriously expect to get decent results with a handful?”
It turned into a long meeting.
Just sprinkle some UX magic on it
Has this happened to you? With user experience being such a growth field, I find I’m increasingly meeting with people who claim to want some UX magic sprinkled on their user interface but appear not to know many of the basic tenets of UX. Sometimes these people are senior managers in an organisation. Other times they may be project managers.
The problem with these kinds of discussion is that, if you’re not careful, you end up giving your internal or external client what they ask for rather than what they need. And when what they’ve asked for doesn’t give them the results they want, they don’t come back for any future work.
It doesn’t have to be this way. Here are some of the common objections I’ve heard, along with some tactful ways to help you convince managers — whether they’re inside or outside your organisation — to see the light.
Market research uses hundreds of people. How come you can get answers with just 5?
Market research is based on opinions. Opinions vary from person to person. It would be ludicrous for a political pollster to sample 5 people in an attempt to predict an election. And even if we take a single person, his or her opinions will change over time, depending on what’s in the news, the other experiences they have, and how we phrase the question.
To reduce the inherent variability in opinion data, we need to sample a large group of people. For example, if 10,000 people use our product, and we want to know how many of them think it is easy to use, we’ll need to randomly sample 370 of them to achieve a sampling error within 5%.
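The 370 figure falls out of the standard sample-size formula for a proportion, with a finite-population correction. Here’s a minimal sketch, assuming the usual 95% confidence level (z = 1.96) and the most conservative proportion estimate (p = 0.5):

```python
import math

def sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Required sample size for estimating a proportion,
    with a finite-population correction."""
    # Infinite-population estimate
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    # Correct for the finite population and round up
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(sample_size(10_000))  # → 370
```

Note how weakly the answer depends on population size: even for a million users, the required sample only rises to around 384.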
In contrast, user experience research is based on behaviour. Behaviour turns out to be remarkably consistent from person to person. For example, if you watch 5 people approach a door, and 4 of them attempt to pull it when the door needs to be pushed, you know there’s something wrong with the design. You don’t need to randomly sample 370 people to draw this conclusion. You observe that the door has a pull handle, and it’s probably the handle that’s causing the problem. So you replace the pull handle with a push panel, and see if you’ve fixed the problem.
User experience researchers can get away with small samples because they are looking for behavioural insights, not opinions.
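The “five users” rule of thumb is usually traced to Nielsen and Landauer’s problem-discovery model, where the proportion of usability problems found by n users is 1 − (1 − p)^n, and p (the chance that a single user encounters a given problem) is commonly estimated at around 0.31. A quick sketch — the 0.31 value is an assumption from that literature, not a universal constant — shows how quickly discovery plateaus:

```python
def problems_found(n_users, p=0.31):
    """Proportion of usability problems found by n users,
    per the Nielsen-Landauer problem-discovery model."""
    return 1 - (1 - p) ** n_users

for n in (1, 3, 5, 10):
    print(f"{n:2d} users: {problems_found(n):.1%}")
```

With p = 0.31, five users surface roughly 84% of the problems, and each additional user adds progressively less — which is why small, iterative tests beat one large one.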
Our product is aimed at everyone, so we can use ourselves as users
This one contains so many flawed assumptions that you’ll need to take a deep breath before answering it.
First up, we have the ‘aimed at everyone’ assumption. Just because everyone can use your product doesn’t mean that everyone will. The downside of a product with ‘something for everyone’ is that it has ‘everything for no one’. Even if your product will be used by a wide variety of users, focusing on a small group of users first will result in a product that’s much more likely to be successful.
The best evidence for this comes from Geoffrey Moore’s ‘Crossing the Chasm’, a marketing book published over 20 years ago but now enjoying a renaissance as part of the Lean Startup movement. Moore shows that whenever truly innovative high-tech products are first brought to market, they initially have some success in an early market made up of technology enthusiasts and visionaries. But most then fall into a chasm, during which sales falter and often plummet. To cross the chasm, high-tech products must first be adopted by niche customers who find the product a total solution to their specific needs. He calls this niche a ‘beachhead segment’ and this is the group of customers you should aim to satisfy first (for example, by developing personas).
The second assumption — that you can use the person at the next desk as your user — is equally flawed. With the exception of intranets, it’s very rare that internal staff are the target market for the product you’re designing. Real users are almost certainly less tech-savvy, much less knowledgeable about the product domain and a lot less tolerant of the product’s shortcomings than internal users will be.
Here’s an interesting story I came across that shows the value of listening to users. In the early 1950s Leo Fender decided to amplify solid steel-string guitars. He wasn’t a guitar player so he interviewed guitarists and watched them play to help him understand what was important.
One guitarist he met, called Dick Dale, kept pushing Fender for an amplified guitar that could be heard over a crowd of people. But every time Fender built one, the guitarist kept breaking it with his playing. Fender couldn't understand why.
One night Fender went down to the Rendezvous Ballroom in Balboa, California with his friend Freddie Tavares. Fender stood in the middle of four thousand screaming and dancing Dick Dale fans and said to Freddie: “I now know what Dick Dale is trying to tell me. Back to the drawing board”. (You can get an idea of Dick Dale's playing style from this 1963 clip on YouTube).
I sometimes wonder if the real inventor of contextual inquiry was Leo Fender.
When managers ask you to design in isolation from users, it's rather like being asked to buy a book for someone else to enjoy reading on holiday. Just because you like a certain author doesn't mean someone else will enjoy reading the book. You'll only be able to get the right book if you know something about the person, either by spending some time with them or by asking questions.
Users don’t know what they want
Perhaps you’ve been in a meeting with someone who repeats the famous Henry Ford quotation: “If I had asked people what they wanted, they would have said a faster horse.” This is normally said with a dismissive wave of the hand, indicating that this is conclusive proof that speaking to customers in the early stages of design has no value.
If you want to get fired, you could just respond that there’s no evidence that Henry Ford ever actually said this. On the other hand, if you want to keep your job and do the research, you should agree. “You’re right,” you should say, “users don’t know what they want. So instead of asking them, I plan to transport them to the future and see what we can learn by watching them use our new concept”.
User experience research isn’t about finding out what people like or dislike. And it's not about asking users to design your interface. It’s about seeing the difficulties users face when trying to use the design you’ve invented.
By their very nature, project managers are focused on solutions. They are often preoccupied with making decisions on what to build, how to build it and what new features it should have. An unintended consequence is that they sometimes fail to take the long view and consider the problems they are solving. One role of the user experience practitioner is to help project managers take this long view.
Apple doesn’t do user research so why should we?
This is closely related to the previous objection. Steve Jobs has famously said that Apple “doesn’t do market research” and more recently Sir Jonathan Ive said: “We don’t do focus groups — that is the job of the designer. It’s unfair to ask people who don’t have a sense of the opportunities of tomorrow from the context of today to design.”
There’s nothing new here of course. The problem is that people conflate market research with user research. Apple have found market research methods, like focus groups, to be ineffective ways of finding out what people want from technology or how they’ll use it. (We said the same thing 8 years ago in one of our most popular articles. This is why, unlike most of our competitors, we don’t do focus group research).
But this doesn’t mean Apple doesn’t do user research. In the famous ‘Playboy’ interview in 1985, Jobs said: “We’ve done studies that prove that the mouse is faster than traditional ways of moving through data or applications,” and there’s lots of evidence of usability testing being carried out in the early development of the Mac.
Another of my favourite examples comes from Apple’s User Interface Group. They needed to prototype a portable computer for architects and the first design questions centred on the size and weight of the device. So they stuffed a pizza box with bricks to match the expected weight of the computer and asked an architect to carry it about. They then used techniques from user research to observe how the architect carried the “computer”, noted down the other things he carried, and identified the tasks he carried out. These observations changed the design team’s thinking towards a softer form.
Our agency does all of this for us
By ‘agency’ here I’m referring to the company that designs and implements your web site or product. An agency will typically provide an all-in-one design service that also includes UX research. It’s understandable that managers expect their design agency to have the UX base covered because that’s one of the things they’re paying for.
Now, there are good and bad agencies and I don’t want to tar them all with the same brush. But in my experience there are some flaws behind this assumption.
- An agency doesn’t get paid to please users: it gets paid to please the client. By the time the client has discovered the system isn’t delivering the business benefits expected, the agency has cashed the cheque.
- Clients are often deluded into thinking they know their users well. Agencies are often complicit in this and can get swayed by the client’s view of the user, rather than doing their own research. It’s difficult to tell clients they’re wrong — and even more difficult to ask them for the money to pay for the research to prove it.
- Clients are usually unwilling to pay for multiple iterations of a design, arguing that the agency should get it right first time. Similarly, few clients will pay for follow up research to check that the final design is, in fact, better than the one it replaced.
These 5 objections aren’t exhaustive. There are many others that never made it into this article including:
- “Rather than test with users, we want you to spend only an hour or two and give us some quick feedback”.
- “We can do an online survey and ask people how they work. We’ll get more people for less money than doing field research (and we can include Poland and Germany without having to travel).”
- “Our researchers already go into homes and interview people, so we can do contextual inquiry ourselves.”
- “We already do user research: we recently had 5 teachers in a room and we demonstrated our product and then asked them to talk about it.”
- “We can’t let usability needs dictate the aesthetics.”
There are many misconceptions about user experience, and as a result some project managers still do not fully embrace UX. Why not try preparing your own set of responses to these kinds of objections and start to better educate your clients?
Thanks to Philip Hodgson for the Leo Fender story and for making improvements to this article.
About the author
Dr. David Travis (@userfocus on Twitter) holds a BSc and a PhD in Psychology and he is a Chartered Psychologist. He has worked in the fields of human factors, usability and user experience since 1989 and has published two books on usability. David helps both large firms and start ups connect with their customers and bring business ideas to market. If you like his articles, you'll love his online user experience training course.