Minting ideas

You may have heard the gnomic utterance that 'a usability test won't improve usability'. This is because usability tests are good at finding problems but offer little in the way of solutions. To arrive at a solution, we need to take three further steps:

  • Generate insights: what is the underlying problem that led to our observations?
  • Develop hypotheses: what do we think is causing the underlying problem?
  • Create design solutions: what's the simplest change we can make to fix the underlying problem?

Start with data

The output from a usability test takes the form of 'observations'. An observation is an objective description of something that you saw or heard during the test. An observation is not your interpretation of what's behind the problem or how you think the problem can be fixed.

An observation could be a direct quotation, a user goal, a user action, a pain point or anything that surprised you. Here are some examples:

  • Direct quotation: "I don't know what this 'subscribe' button means. Will it be a recurring payment?"
  • User goal: Plans his delivery route around lunch; wants to be near a baker's or a supermarket at noon.
  • User action: When starting a new expense claim, she retrieves her previous claim and uses it as a template.
  • Pain point: Working for longer than 30 minutes at a stretch is difficult because of battery life.

In contrast, this is not an observation: 'We need to change the "subscribe" label to "sign up".' That's a design solution: an interpretation of what's behind the problem, which may be wrong. We'll return to solutions later, but right now we need to keep our data 'clean' by ensuring we focus on objective observations.

Generate insights: what is the underlying problem?

A usability test can generate a large number of observations: it's been likened to drinking from a fire hose. So we need to start by whittling down our observations.

Our first step is to remove duplicate observations: situations where you made the same observation with two or more participants. Before discarding the duplicates, make a note of how many participants prompted each observation, as this is useful for later prioritisation.
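If you log observations in a spreadsheet or text file rather than on sticky notes, a few lines of code can do the de-duplication and counting for you. Here's a minimal sketch in Python; the (participant, observation) format and the sample data are illustrative assumptions, not output from a real test:

    from collections import defaultdict

    # Each observation is logged as a (participant_id, observation_text) pair.
    # The sample data below is purely illustrative.
    raw_observations = [
        ("P1", "Did not notice the 'subscribe' button"),
        ("P2", "Did not notice the 'subscribe' button"),
        ("P2", "Reused a previous expense claim as a template"),
        ("P3", "Reused a previous expense claim as a template"),
        ("P3", "Did not notice the 'subscribe' button"),
    ]

    # Collapse duplicates but keep the set of participants behind each
    # observation: the participant count is useful for prioritisation later.
    participants = defaultdict(set)
    for participant, observation in raw_observations:
        participants[observation].add(participant)

    # List observations, most frequently seen first.
    for observation, people in sorted(
            participants.items(), key=lambda item: len(item[1]), reverse=True):
        print(f"{len(people)} participant(s): {observation}")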

Next, discard observations that aren't important or relevant (although you may want to come back and review these later). An example might be: "User says that she tends to use this web site at home when it's more quiet". This is an interesting factoid but it's not going to help us identify usability strengths or weaknesses.

These steps will reduce your initial list of observations but it's still likely you'll have a hundred or so to sort through. To make serious inroads, you'll need to create an affinity diagram: this entails placing the observations into logical groups.

You can seriously speed up the time it takes to identify logical groups by getting the design team to help you. Write each observation on a sticky note and ask team members to do the affinity sort on a whiteboard. This also helps team members (who may have observed just one or two sessions) get exposed to the range of data you collected during the test.

What makes a 'logical group' is up to you, but as a rule of thumb, if you have 100 observations, you should not have 100 groups. I typically find that I end up with somewhere between 10 and 15 groups. For example, I may have one group of observations about 'terminology', another about a very specific UI element that made people struggle and another group about problems navigating the system.

Once you have your groups, start creating insights. An 'insight' captures the thing you've learnt from this cluster of observations. You should write each insight as a sentence with a point of view. Think of it as a headline in a newspaper or a conclusion in a report.

Insight statements should be provocative: deliberately causing a strong reaction. One insight might read: "Users don't like the way the search function deletes the search query when it shows the search results." Another might read: "Users don't use the same names for things as we do." A third might read: "Users don't understand the workflow: they want to backtrack."

Creating insight statements is a reminder that part of your role as a user researcher is to continually act as an irritant to the design team. Harry Brignull makes this point when he writes, "A researcher who is keen to please the design team is useless." You should never let the design team become complacent about your product's user experience; strongly worded insight statements help the team appreciate the work that's still to be done.

At this point, you should step back and look at your affinity diagram. Get the team to dot vote to agree the high priority issues: these are the issues that need to be addressed first in any revised design. It's foolish to try to fix everything, so identify the top three issues and then move on to fix those.
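If the dot votes end up in a spreadsheet, ranking the insights and picking the top three takes only a few lines. Here's a minimal sketch in Python, with made-up insights and vote counts for illustration:

    # Dot votes per insight, as tallied from the whiteboard.
    # Insights and counts are illustrative only.
    votes = {
        "Search deletes the query when it shows results": 7,
        "Users don't use the same names for things as we do": 5,
        "Users don't understand the workflow": 4,
        "Light grey text is hard to read": 2,
    }

    # Rank insights by vote count and take the top three to fix first.
    top_three = sorted(votes, key=votes.get, reverse=True)[:3]
    for rank, insight in enumerate(top_three, start=1):
        print(f"{rank}. {insight} ({votes[insight]} votes)")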

Develop hypotheses: what's causing the problem?

Sometimes, when you look at one of your top three insights, the fix may be obvious. For example, with an insight like, 'People struggled to read the light grey text on several screens', the fix is crystal clear. But this is unusual. Most of the insights from our analysis could have several root causes.

For example, I recently ran a usability test of an app that mimicked Google's Material Design guidelines. The app had a vertical ellipsis (a set of three vertical dots) on the top right of the user interface. Clicking the dots opened a 'more actions' menu. If you use Chrome, you might be familiar with this kind of control.

[Image: The Material Design ellipsis]

An insight from our usability test was that our users did not interact with this navigation menu within the app. What might be behind this?

Here are some possible hypotheses that occurred to me:

  • People don't realise it's a control. They think it's branding or a visual design conceit.
  • The control is hard to see. It's on the far right of the screen. People read left to right and don't look there.
  • People see the control but they assume it does something else, like expand the page. It doesn't look like a menu control.
  • People don't need the menu: they can do everything they need to with the existing navigation options.

We don't know which of these, if any, is correct. We may have a gut feeling that one of them is more likely than another, but without data it's precisely that: a gut feeling. How can we turn hypotheses into testable design changes?

Create design solutions: what's the simplest change we can make to fix the problem?

In his book, 'Rocket Surgery Made Easy', Steve Krug argues that when fixing usability problems, you should try to do the least you can. He argues for a 'tweaking' approach where you ask: "What's the smallest, simplest change we can make that's likely to keep people from having the problem we observed?"

I like this approach because it avoids a major redesign and fits well with the rapid, iterative approach used by most Scrum teams. The point is that there's almost always something you can do to mitigate the impact of the problem on users.

Another benefit is that with this approach, you can often fix a problem in a few days. This compares with weeks or months for a complete redesign.

Here's how we can use this approach to generate testable design ideas from our initial list of hypotheses.

Generating hypotheses and testable design ideas from the usability test insight "Users did not interact with the ellipsis navigation menu":

  • Hypothesis: People don't realise it's a control. They think it's branding or a visual design conceit.
    Simplest change: Make it look more clickable, for example by drawing an outline around it and adding a drop shadow.
  • Hypothesis: The control is hard to see. It's on the far right of the screen. People read left to right and don't look there.
    Simplest change: Place the control on the left-hand side.
  • Hypothesis: People see the control but they assume it does something else, like reload the page. It doesn't look like a menu control.
    Simplest change: Replace the three dots with a word like 'MENU'.
  • Hypothesis: People don't need the menu: they can do everything they need to on the main screen.
    Simplest change: Ask people to do tasks that require use of the menu, or remove some options from the main screen and re-test.

The point is that each hypothesis would lead to a different change to the design.
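It can also help to record each hypothesis alongside its simplest candidate change and the outcome of re-testing, so the mapping survives beyond the whiteboard. Here's a minimal sketch in Python; the structure and field names are an illustrative suggestion, not a prescribed format:

    # Track each hypothesis with its simplest candidate fix and the outcome
    # of re-testing. The content mirrors the table above; the format is
    # just a suggestion.
    hypotheses = [
        {
            "hypothesis": "People don't realise it's a control",
            "simplest_change": "Add an outline and a drop shadow",
            "retest_result": None,  # fill in after the next usability test
        },
        {
            "hypothesis": "The control is hard to see on the far right",
            "simplest_change": "Move the control to the left-hand side",
            "retest_result": None,
        },
    ]

    for item in hypotheses:
        status = item["retest_result"] or "not yet re-tested"
        print(f"{item['hypothesis']} -> {item['simplest_change']} [{status}]")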

As a user researcher you are a fully paid-up member of the design team

User researchers are sometimes reluctant to take these steps, believing that 'design' isn't their job. More than one user researcher has told me that their role is simply to report the observations from a test to the design team, not to provide recommendations for improvement. Failing to provide design solutions risks signalling that you are useful only as a critic, not as a problem solver. Conversely, offering realistic and actionable solutions increases your credibility and helps position you as a design partner.

Although you may not have 'design' in your job title, as a user researcher you are as much a part of the design team as an interaction designer, a visual designer and a front-end developer. This is because you have the kind of insight that can be gained only by sitting next to users as they struggle with the product.

Often the design team know there is something wrong with a design. But they don't know how to proceed or they have got stuck on a solution that's sub-optimal, or they are thinking of it only from an engineering or visual design point of view, or they risk implementing a solution that creates a worse problem. So they need your knowledge about users to uncover the ideas that might fix it.

About the author

David Travis

Dr. David Travis (@userfocus) has been carrying out ethnographic field research and running product usability tests since 1989. He has published three books on user experience including Think Like a UX Researcher. If you like his articles, you might enjoy his free online user experience course.


