Usability testing is the gold standard for evaluating user interfaces. Although many people are familiar with usability testing desktop systems or web sites, fewer people have experience testing mobile devices. As we’ve discussed before, mobile is different from desktop and this applies to usability testing too. When you’re testing a mobile device, you need to make some important changes to your testing protocol.
The fundamental steps in running any usability test are the same. Here they are:
- Get buy-in
- Recruit your participants
- Develop your tasks
- Finalise your prototype
- Set up the testing rig
- Moderate the test
- Observe the test
- Analyse the data
- Improve the design
So what changes do you need to make when testing a mobile device?
Step 1: Get buy-in
When you test any system for usability, mobile or desktop, you need to answer some basic questions to ensure that you end up testing the right product with the right participants. For every test, you need to answer the classic Five W’s (and one H) of journalism.
- Why are you running the test?
- Where will it take place?
- When will it take place?
- Who will be the test participants?
- What system (and what functionality) will you be testing?
- How will you collect and analyse the data?
The answers to these questions are typically captured in a Usability Test Plan. This is a document that gets everyone — managers, developers and other stakeholders — to discuss and agree on the critical decisions that need to be made. It means that when you present back your findings, no-one questions why you tested the wrong functions, or why you asked the wrong users to do the wrong tasks.
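To make the checklist concrete, here's a minimal sketch in Python of a test plan as a simple data structure with a completeness check. The class, field names and example answers are all invented for illustration; a real test plan is a shared document, not code, but the same "no blank answers before testing starts" discipline applies.

```python
from dataclasses import dataclass, fields


@dataclass
class UsabilityTestPlan:
    """One field per question the plan must answer: the Five W's and one H."""
    why: str    # purpose of the test
    where: str  # location (lab, office, remote)
    when: str   # dates and session schedule
    who: str    # participant profile
    what: str   # system and functionality under test
    how: str    # data collection and analysis method


def unanswered(plan: UsabilityTestPlan) -> list:
    """Return the names of any questions left blank, so gaps surface early."""
    return [f.name for f in fields(plan) if not getattr(plan, f.name).strip()]


# Hypothetical plan with one question still unresolved.
plan = UsabilityTestPlan(
    why="Check the first-time checkout flow",
    where="On-site usability lab",
    when="",  # not yet agreed with stakeholders
    who="Android owners who shop on their phone at least weekly",
    what="Checkout screens of the Android app, current build",
    how="Think-aloud sessions; task success plus sticky-note observations",
)
print(unanswered(plan))  # → ['when']
```

Surfacing the unanswered questions before the sessions are booked is exactly what the written test plan does for the project team.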
Step 2: Recruit your participants
It’s obvious that you want participants who are representative of your end users, but it’s just as important that they are regular users of the platform you’re testing. If you’re testing an Android app but recruit predominantly iPhone owners, don’t be surprised when your app performs poorly in testing.
Here’s an example of why that matters. About 15 years ago I ran a usability test of a fingerprint identification system for the UK Home Office. My first participant — a fingerprint expert at Scotland Yard — really struggled with the system. But it wasn’t because of usability issues with the interface. It was because the user hadn’t used a computer before and was struggling to use the mouse. At one point, he had the mouse upside down and so ‘up’ movements were being translated to ‘down’ movements of the cursor. It’s no surprise he struggled with the system.
Nowadays, this situation is increasingly rare — for example, the majority of computer users have some experience with Windows. And even if they have only ever used a Mac, once a Windows application has been opened, users can rely on a common set of user interface conventions, such as scroll bars, menus and icons to help guide their way. But user interface conventions for mobile are still in their infancy. For example, Android apps tend to include a specific on-screen button to refresh the display, whereas iPhone apps have a hidden control: you pull down the screen to refresh. You don’t want your participants spending their time learning the UI conventions of a new platform, so make sure you recruit users with experience of your device.
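A recruitment screener is the usual way to enforce this. Here's a minimal sketch in Python of the platform-matching rule; the candidate data and field names are invented for illustration, and a real screener would also cover demographics, task experience and availability.

```python
# Hypothetical screener responses: which phone each candidate owns
# and whether they use it daily.
candidates = [
    {"name": "Asha", "own_phone": "Android", "uses_daily": True},
    {"name": "Ben", "own_phone": "iPhone", "uses_daily": True},
    {"name": "Carol", "own_phone": "Android", "uses_daily": False},
]


def screen(candidates, platform):
    """Keep only candidates who own, and regularly use, the platform under test."""
    return [c["name"] for c in candidates
            if c["own_phone"] == platform and c["uses_daily"]]


print(screen(candidates, "Android"))  # → ['Asha']
```

Note that Carol is rejected even though she owns the right phone: occasional users will still spend session time learning platform conventions rather than exercising your design.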
Step 3: Develop your tasks
All usability tests are based on the same idea: you ask people to carry out realistic tasks with a system and then you observe them to see where they struggle. In a test of a desktop system, it’s quite usual to have someone using the system for an hour or so. This is reasonably representative of real-world use because people usually use desktop apps for extensive periods of time to get their work done.
Mobile is different. People may have your mobile app open to occupy two minutes in a queue at the supermarket. Or they may have a very specific question they want an answer to (“Where’s the nearest Chinese restaurant?”). Or they may be using the app in a specific context that is important to emulate.
For example, I was recently looking at several photography books in a bookshop in London. I wanted to check the prices of some of these books on Amazon so I fired up Amazon’s mobile app which allowed me to scan the barcode and check the price. But I was carrying out this activity with a computer bag over one shoulder, a heavy book balanced in one hand, and my mobile in my right hand, trying to scan the barcode in the dim light of a bookshop. I also needed to carry out the task a bit surreptitiously, as I didn’t want the embarrassment of being spotted by a shop assistant who might ask what I was doing. This is very different from running a usability test in a brightly lit lab where I can put the book on a desk. Context matters with mobile.
Step 4: Finalise your prototype
When testing desktop systems, it’s quite usual for the test administrator to prepare a ‘typical system’ and then ask the participant to work with it. This ‘typical system’ may have a smaller or larger screen than the participant’s own computer and the mouse and keyboard may be slightly different. But it’s not usually much of a stretch to ask the participant to use this system in lieu of the one that they use day-to-day.
Mobile is different. Users customise their mobile device more extensively than they customise their computer and your participant’s configuration may not reflect the standard, out-of-the-box implementation. For example, some apps may not be where you expect them to be on the participant’s phone. Some services (like location services) may be turned off. Asking a participant to use your ‘default’ system could make them feel like they have just rented a car in a country where people drive on the other side of the road: everything’s familiar but it seems to be in the wrong place.
Fortunately, there are several ways to get your prototype onto the participant’s phone so you can test it. The app doesn’t need to be fully coded: for example, you can create an interactive prototype in your favourite desktop presentation application and then export it to the mobile device as a clickable PDF. On an iPhone, you could also use an app like POP ("Prototyping on Paper") which allows you to take pictures of your sketches and then link them together for testing on the phone. You’ll find an increasing number of toolkits that contain all the widgets you need to simulate a real app. There are also some apps around (like Interface) that will let you prototype right on the device itself.
Step 5: Set up the testing rig
One of the main problems faced by usability testers of mobile devices is mirroring the participant’s screen.
With an iPhone, you can use AirPlay to display the screen of the iPhone on an Apple TV-enabled system, or you could hack together Reflector and Silverback. But for other devices, there’s no robust software solution available just yet, which means we’re back to the early days of usability testing, when we used cameras to record the screen. There are various solutions open to you: one of the simplest is to mock up a rig out of perspex or Meccano and connect a web camera to it. These rigs are cheap and simple to make, but prepare yourself for screen recordings that are hard to read, especially as the ambient illumination changes.
Step 6: Moderate the test
Test moderation is a lot more challenging with a mobile device. As a moderator, it’s hard — sometimes impossible — to view the participant’s mobile device. Peering over your participant’s shoulder is — let’s be frank — a little bit weird. It also makes participants use the device differently, as they will try to hold it in a way that you can see the screen too.
Because of this, you’ll find it easier if you have a remote monitor that mirrors the participant’s screen (even if it’s just fed by a camera pointed at the screen).
Step 7: Observe the test
I find that one of the quickest and most effective approaches to test observation is to ask someone else to do it for you… Seriously.
Get the design team in the observation room and provide each person with a stack of sticky notes. Whenever they spot a usability issue or observe an interesting finding, they should write it down on a sticky note. Sticky notes have the benefit of being small which means people can’t write much — usually just enough to capture the essence of the observation.
Step 8: Analyse the data
Mobile usability testing needs a lightweight approach to analysing and reporting the results from a usability test. One rapid way of doing the analysis is to assemble the sticky notes from the previous step and ask members of the design team to group and organise the sticky notes on a wall (removing any duplicates). Once everyone is happy with the organisation, provide each group of sticky notes with a name that captures the usability issue.
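Once the wall-sorting is done, the output is essentially a frequency count of named issues. Here's a minimal sketch in Python of that tally; the sticky-note labels are invented, and in practice the grouping and naming is a judgement call made by the team at the wall, not by code.

```python
from collections import Counter

# Hypothetical sticky notes collected from observers, after the team has
# normalised near-duplicates to a short issue label at the wall.
sticky_notes = [
    "missed the pull-to-refresh control",
    "missed the pull-to-refresh control",
    "couldn't find the basket icon",
    "scan button too small to tap",
    "missed the pull-to-refresh control",
    "couldn't find the basket icon",
]

# Group duplicates and rank issues by how often they were observed.
issues = Counter(sticky_notes)
for issue, count in issues.most_common():
    print(f"{count}x {issue}")
```

Ranking issues by the number of observers who independently noted them gives the team a rough priority order to carry into the next step.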
The important point to remember is that your aim here is to describe the problems, you’re not creating solutions. That comes next.
Step 9: Improve the design
Usability testing only makes sense if you change the design to fix the problems that you’ve found. Steve Krug has a wonderfully pragmatic approach to this: for each problem, you ask, “What’s the smallest, simplest change we can make that’s likely to keep people from having the problem we observed?”
You then make the change, check you’ve not broken anything else, and see if you’ve solved the problem. I like this approach because it discourages people from undertaking a major re-design of the interface, which can take a long time to complete and often introduces a new set of usability issues to fix.
The steps in running a usability test are pretty much the same irrespective of what you’re testing. But the devil, as they say, is in the details. By taking account of the issues I’ve listed above, you should be able to confidently adapt your current testing practice so that you can run mobile tests alongside your tests of web sites and desktop software.
About the author
Dr. David Travis (@userfocus on Twitter) holds a BSc and a PhD in Psychology and is a Chartered Psychologist. He has worked in the fields of human factors, usability and user experience since 1989 and has published two books on usability. David helps both large firms and start-ups connect with their customers and bring business ideas to market.