Get hands-on practice in all the key areas of UX and prepare for the BCS Foundation Certificate.
There’s no better way to get feedback on the usability of your mobile app than by running a usability test. Although the process is the same as when testing a desktop app, there are quite a few differences in the details. Adjust your test to take account of these differences and you’ll be better placed to identify the real problems that real users will have with your app when used in an authentic context.
Usability testing is the gold standard for evaluating user interfaces. Although many people are familiar with usability testing desktop systems or web sites, fewer people have experience testing mobile devices. As we’ve discussed before, mobile is different from desktop and this applies to usability testing too. When you’re testing a mobile device, you need to make some important changes to your testing protocol.
The fundamental steps in running any usability test are the same: plan the test, recruit representative participants, choose realistic tasks, prepare the prototype, record and moderate the sessions, observe and capture the findings, analyse the results, and fix the problems you find.
So what changes do you need to make when testing a mobile device?
When you test any system for usability, mobile or desktop, you need to answer some basic questions to ensure that you end up testing the right product with the right participants. For every test, you need to answer the classic Five W’s (and one H) of journalism: why, what, who, when, where and how.
The answers to these questions are typically captured in a Usability Test Plan. This is a document that gets everyone — managers, developers and other stakeholders — to discuss and agree on the critical decisions that need to be made. It means that when you present back your findings, no-one questions why you tested the wrong functions, or why you asked the wrong users to do the wrong tasks.
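To make the Five W’s concrete, here’s a minimal sketch of a test plan captured as a data structure, with a check for unanswered questions. This is purely illustrative: the field values and the `missing_decisions` helper are hypothetical examples, not part of any real test plan format.

```python
# Illustrative sketch only: a usability test plan expressed as a simple
# data structure, using the Five W's (and one H) as its skeleton.
# All field values below are hypothetical examples.
test_plan = {
    "why":   "Check that first-time users can complete a purchase in the app",
    "what":  "v0.9 build of the Android shopping app",
    "who":   "6 participants who own and regularly use an Android phone",
    "when":  "Two half-day sessions in the week before the beta release",
    "where": "On location in a cafe, to approximate a real usage context",
    "how":   "Moderated think-aloud sessions, screen recorded via a camera rig",
}

def missing_decisions(plan):
    """Return any of the classic questions the team has not yet answered."""
    required = ["why", "what", "who", "when", "where", "how"]
    return [q for q in required if not plan.get(q)]

print(missing_decisions(test_plan))  # an empty list means every question is answered
```

The point of writing the plan down, in whatever format, is the same as the document described above: it forces the critical decisions into the open before the first participant arrives.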
In this step, you’ll go about recruiting your participants. It’s obvious that you want participants to be representative of your end users but it’s just as important that your participants are regular users of the platform that you’re testing. If you’re testing an app for Android, but recruit predominantly iPhone owners, don’t be surprised when your app performs poorly in testing.
Here’s an example of why that matters. About 15 years ago I ran a usability test of a fingerprint identification system for the UK Home Office. My first participant — a fingerprint expert at Scotland Yard — really struggled with the system. But it wasn’t because of usability issues with the interface. It was because the user hadn’t used a computer before and was struggling to use the mouse. At one point, he had the mouse upside down and so ‘up’ movements were being translated to ‘down’ movements of the cursor. It’s no surprise he struggled with the system.
Nowadays, this situation is increasingly rare — for example, the majority of computer users have some experience with Windows. Even participants who have only ever used a Mac can, once an application is open, rely on a common set of user interface conventions, such as scroll bars, menus and icons, to help guide their way. But user interface conventions for mobile are still in their infancy. For example, Android apps tend to include a specific on-screen button to refresh the display, whereas iPhone apps have a hidden control: you pull down the screen to refresh. You don’t want your participants spending their time learning the UI conventions of a new platform, so make sure you recruit users with experience of your device.
All usability tests are based on the same idea: you ask people to carry out realistic tasks with a system and then you observe them to see where they struggle. In a test of a desktop system, it’s quite usual to have someone using the system for an hour or so. This is reasonably representative of real-world use because people usually use desktop apps for extensive periods of time to get their work done.
Mobile is different. People may have your mobile app open to occupy two minutes in a queue at the supermarket. Or they may have a very specific question they want an answer to (“Where’s the nearest Chinese restaurant?”). Or they may be using the app in a specific context that is important to emulate.
For example, I was recently looking at several photography books in a bookshop in London. I wanted to check the prices of some of these books on Amazon so I fired up Amazon’s mobile app which allowed me to scan the barcode and check the price. But I was carrying out this activity with a computer bag over one shoulder, a heavy book balanced in one hand, and my mobile in my right hand, trying to scan the barcode in the dim light of a bookshop. I also needed to carry out the task a bit surreptitiously, as I didn’t want the embarrassment of being spotted by a shop assistant who might ask what I was doing. This is very different from running a usability test in a brightly lit lab where I can put the book on a desk. Context matters with mobile.
When testing desktop systems, it’s quite usual for the test administrator to prepare a ‘typical system’ and then ask the participant to work with it. This ‘typical system’ may have a smaller or larger screen than the participant’s own computer and the mouse and keyboard may be slightly different. But it’s not usually much of a stretch to ask the participant to use this system in lieu of the one that they use day-to-day.
Mobile is different. Users customise their mobile device more extensively than they customise their computer and your participant’s configuration may not reflect the standard, out-of-the-box implementation. For example, some apps may not be where you expect them to be on the participant’s phone. Some services (like location services) may be turned off. Asking a participant to use your ‘default’ system could make them feel like they have just rented a car in a country where people drive on the other side of the road: everything’s familiar but it seems to be in the wrong place.
Fortunately, there are several ways to get your prototype onto the participant’s phone so you can test it. The app doesn’t need to be fully coded: for example, you can create an interactive prototype in your favourite desktop presentation application and then export it to the mobile device as a clickable PDF. On an iPhone, you could also use an app like POP ("Prototyping on Paper") which allows you to take pictures of your sketches and then link them together for testing on the phone. You’ll find an increasing number of toolkits that contain all the widgets you need to simulate a real app. There are also some apps around (like Interface) that will let you prototype right on the device itself.
One of the main problems faced by usability testers of mobile devices is mirroring the participant’s screen.
With an iPhone, you can use AirPlay to display the screen of the iPhone on an Apple TV-enabled system, or you could hack together Reflector and Silverback. But for other devices, there’s no robust software solution available just yet, which means we’re back to the early days of usability testing where we used cameras to record the screen. There are various solutions open to you: one of the simplest is to mock up a rig out of perspex or Meccano and connect a web camera to it. These are cheap to produce and simple to make, but prepare yourself for screen recordings that are hard to read, especially as the ambient illumination changes.
Test moderation is a lot more challenging with a mobile device. As a moderator, it’s hard — sometimes impossible — to view the participant’s mobile device. Peering over your participant’s shoulder is — let’s be frank — a little bit weird. It also makes participants use the device differently, as they will try to hold it in a way that you can see the screen too.
Because of this, you’ll find it easier if you have a remote monitor mirroring the participant’s screen (even if it’s just from a camera pointed at the screen).
I find that one of the quickest and most effective approaches to test observation is to ask someone else to do it for you… Seriously.
Get the design team in the observation room and provide each person with a stack of sticky notes. Whenever they spot a usability issue or observe an interesting finding, they should write it down on a sticky note. Sticky notes have the benefit of being small which means people can’t write much — usually just enough to capture the essence of the observation.
Mobile usability testing needs a lightweight approach to analysing and reporting the results from a usability test. One rapid way of doing the analysis is to assemble the sticky notes from the previous step and ask members of the design team to group and organise the sticky notes on a wall (removing any duplicates). Once everyone is happy with the organisation, provide each group of sticky notes with a name that captures the usability issue.
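The sticky-note sorting step above can be sketched as a few lines of code. This is just an illustration of the grouping-and-deduplicating idea: the sample observations and the theme labels are hypothetical, and in practice the grouping is done by people on a wall, not by software.

```python
# Illustrative sketch: grouping sticky-note observations by theme and
# removing exact duplicates. The sample notes below are hypothetical.
from collections import defaultdict

notes = [
    ("checkout", "Missed the 'Apply voucher' link"),
    ("checkout", "Missed the 'Apply voucher' link"),   # duplicate observation
    ("search",   "Expected search to tolerate typos"),
    ("checkout", "Unsure whether the order was placed"),
]

groups = defaultdict(set)          # a set discards duplicate notes per group
for theme, observation in notes:
    groups[theme].add(observation)

for theme, observations in sorted(groups.items()):
    print(f"{theme}: {len(observations)} distinct issue(s)")
```

The output mirrors what you get on the wall: a named group for each usability issue, with duplicates collapsed so the team can see how many distinct problems they found.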
The important point to remember is that your aim here is to describe the problems, not to create solutions. That comes next.
Usability testing only makes sense if you change the design to fix the problems that you’ve found. Steve Krug has a wonderfully pragmatic approach to this: for each problem, you ask, “What’s the smallest, simplest change we can make that’s likely to keep people from having the problem we observed?”
You then make the change, check you’ve not broken anything else, and see if you’ve solved the problem. I like this approach because it discourages people from undertaking a major re-design of the interface, which can take a long time to complete and often introduces a new set of usability issues to fix.
The steps in running a usability test are pretty much the same irrespective of what you’re testing. But the devil, as they say, is in the details. By taking account of the issues I’ve listed above, you should be able to confidently adapt your current testing practice so that you can run mobile tests alongside your tests of web sites and desktop software.
Dr. David Travis (@userfocus) has been carrying out ethnographic field research and running product usability tests since 1989. He has published three books on user experience including Think Like a UX Researcher. If you like his articles, you might enjoy his free online user experience course.
copyright © Userfocus 2020.