Why test outside the lab?

Have you ever wanted to run a usability test with people who live or work a long way from your office? Do you have customers who are happy to give you 15 minutes of feedback on your web site but can't devote 90 minutes to take part in a lab-based test? Have you ever wanted to run a usability test with a much larger sample (200+) than in a conventional test?

If your answer to any of these questions is "Yes", then you should consider running a remote usability test. Remote tests are very different from conventional usability tests. With a conventional test, a participant visits your test facility and is taken through usability test tasks by an experienced moderator. With a remote test, participants carry out the test from their home or office and connect to a web site or computer that controls the flow of the test.

One immediate benefit of this approach is that it truly captures the participant's context: the test is carried out in the participant's usual environment using the participant's normal computer, monitor and network connection. This contrasts with the set-up in a usability lab, where participants may be using an unfamiliar browser or a network that's much faster than they're used to. Since context is at the heart of usability, any methodology that gets us closer to a user's true context needs to be taken seriously.

The 2 types of remote usability test

There are 2 flavours of remote usability test, depending on whether you moderate the session or have it run automatically by computer.

  • With a moderated, remote usability test you need to set up a testing environment that allows you to view the participant's screen remotely and talk to the participant. Although this sounds tricky, there are a number of screen sharing services on the Internet that make this straightforward. When done well, observers feel like they're observing a traditional usability test, with the participant in the next room.
  • In contrast, an unmoderated usability test is part survey and part usability test. In a typical implementation, a small control panel floats on top of the main browser window, presenting the tasks and managing the flow of the test. The key benefit of this kind of test is that it's easy to test large sample sizes: 200 participants is quite common. This makes unmoderated tests a great tool for collecting robust measures of usability, such as time on task and task completion rate.

Nate Bolt has written an excellent review of remote moderated usability tests, so in the rest of this article, I'll focus on the unmoderated version.

Unmoderated remote usability tests

Because you're not viewing the participant's screen, unmoderated usability tests are less of a technical challenge to set up than remote, moderated tests. But this comes at a cost. If you can't see the test, how can you ensure the participant stays on track and carries out the tasks?

We've recently developed our own web-based system for running remote, unmoderated usability tests for our clients. We use the term "benchmark" usability testing to describe the service, since we think it more accurately captures the key benefit of the method. During the development of our system, we identified and solved a number of obstacles specific to unmoderated usability tests. Here are 6 tips for running this kind of test, based on our experience.

6 tips for running unmoderated remote usability tests

Keep it short

We discovered that once an unmoderated test goes beyond 15 minutes, people are more likely to quit before completing it. So we recommend constructing a set of tasks that people can complete within this 15-minute window. Although this means that each participant will never get through as many tasks as in a moderated session lasting an hour, it's still possible to get full coverage of the tasks. We achieve this by giving each participant a sub-set of the overall tasks: for example, if the first participant carries out tasks 1-4, the second participant will carry out tasks 5-8. We also found that we needed to make it clear in our instructions to participants that they can give up on a task and move to the next one without incurring a penalty.
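
To make this concrete, here is a minimal sketch of how the rotation might work, assuming participants and tasks are simply numbered in order. The function name and the 4-tasks-per-participant split are illustrative, not the exact logic of our system.

    // Assign each participant a rotating sub-set of tasks so that every task
    // gets covered even though each individual session stays short.
    function assignTasks(participantIndex: number, allTasks: string[], tasksPerParticipant = 4): string[] {
      const start = (participantIndex * tasksPerParticipant) % allTasks.length;
      const subset: string[] = [];
      for (let i = 0; i < tasksPerParticipant; i++) {
        subset.push(allTasks[(start + i) % allTasks.length]);
      }
      return subset;
    }

    // With 8 tasks, participant 0 gets tasks 1-4 and participant 1 gets tasks 5-8.
    const tasks = ["task1", "task2", "task3", "task4", "task5", "task6", "task7", "task8"];
    console.log(assignTasks(0, tasks)); // ["task1", "task2", "task3", "task4"]
    console.log(assignTasks(1, tasks)); // ["task5", "task6", "task7", "task8"]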

Check participants aren't just in it for the incentive

Allowing participants to quit a task causes its own problem. How will you know if participants are genuinely carrying out the tasks and not just clicking through the screens to get to the end as soon as possible? We've found that the best way is to recruit participants who are genuinely interested in doing these kinds of evaluation. In our usability testing panel, we encourage users to sign up to "make a difference" and make it clear that the incentive or prize draw is really a bit of fun and will never make them rich.

Most of the time, it's sufficient simply to check that participants are spending long enough on a task, and warn them if they're working too fast. For example, if a participant spends less than 30 seconds on a task, our system pops up a warning that reads, "Are you sure you've completed this task? You've only spent a few seconds on the web site. You need to use it properly before answering the question." Once participants know you're checking on them, they will be more likely to take the test seriously, since they realise their incentive is at risk. A more thorough approach is to store a participant's IP address along with the exact time that the participant took the test and then cross-check this against the access logs for the web site. Clearly, this is very time-consuming, so you would only do it for the small proportion of participants you suspect of not taking the test seriously.
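
A check like this can live entirely in the participant's browser. The sketch below is a rough illustration that assumes a standard web page (it uses the built-in alert dialog); the 30-second threshold and the warning wording follow the example above, and the function names are hypothetical.

    // Warn participants who answer suspiciously quickly, mirroring the
    // 30-second rule described above.
    const MIN_TASK_SECONDS = 30;
    let taskStartedAt = Date.now();

    function startTask(): void {
      taskStartedAt = Date.now();
    }

    function onAnswerSubmitted(): boolean {
      const secondsOnTask = (Date.now() - taskStartedAt) / 1000;
      if (secondsOnTask < MIN_TASK_SECONDS) {
        window.alert(
          "Are you sure you've completed this task? You've only spent a few seconds " +
          "on the web site. You need to use it properly before answering the question."
        );
        return false; // keep the participant on the current task
      }
      return true; // accept the answer and move to the next task
    }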

Pilot test the tasks

Since you won't be present to help explain the task scenarios to participants, you need to make sure that the scenarios are crystal clear and free of ambiguity. For example, a task scenario like, "Find the contact details for this company" might work in a moderated test but in a remote test it's not clear precisely what details you want. Is it the mailing address, an email address or a telephone number? Instead, use a task like: "You need to telephone the company to report a problem with your order. What number will you call?" The best way to check for ambiguous or hard-to-understand instructions is to pilot test the instructions with a handful of people and ask them to repeat back their understanding of the task.

Measure behaviour rather than opinions

The great thing about large samples is that you can start collecting robust statistical measures of usability. The bad thing about large samples is dealing with free-form comments. This is worth considering when you plan the test. Create "scavenger hunt" style tasks that have a clear "correct" answer. For example, to test out a web site selling mobile phones, you might use a task like: "What is the make and model number of the cheapest 'pay as you go' handset?" When participants come to answer the task, you can then provide a pick list for them to choose the answer from, which will save a lot of time in analysing the data. These kinds of data also make it straightforward to calculate standardised measures of usability: time on task, task completion rate and user satisfaction ratings.
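
To show how little analysis this kind of data needs, here is a minimal sketch that summarises time on task, task completion rate and mean satisfaction from a set of task records. The record fields and function name are assumptions made for the example, not the output format of any particular tool.

    // A single participant's attempt at one task.
    interface TaskRecord {
      taskId: string;
      secondsOnTask: number;
      completed: boolean;   // did they pick the correct answer from the list?
      satisfaction: number; // e.g. a 1-7 rating collected after the task
    }

    // Summarise the standardised measures for one task across participants.
    function summarise(records: TaskRecord[]) {
      const n = records.length;
      const meanTimeOnTask = records.reduce((sum, r) => sum + r.secondsOnTask, 0) / n;
      const completionRate = records.filter(r => r.completed).length / n;
      const meanSatisfaction = records.reduce((sum, r) => sum + r.satisfaction, 0) / n;
      return { n, meanTimeOnTask, completionRate, meanSatisfaction };
    }

    // Example usage with two made-up records for the same task.
    console.log(summarise([
      { taskId: "cheapest-payg-handset", secondsOnTask: 95, completed: true, satisfaction: 6 },
      { taskId: "cheapest-payg-handset", secondsOnTask: 140, completed: false, satisfaction: 3 },
    ]));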

Avoid bias

Even though the test is unmoderated, there are still sources of bias that you need to eradicate. In particular, participants speed up once they begin to build a mental picture of the way the site is organised, so make sure you present the test tasks in a random order; that way, no task consistently benefits from this learning effect. Another source of potential bias is in the tasks you create. Make sure you select tasks that exercise specific areas of the site thoroughly. It's tempting to create tasks that simply have "answerable questions", in the way you would write exam questions, but that don't really give the site a good work-out.
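
For illustration, here is one way to randomise the task order for each participant, using a standard Fisher-Yates shuffle; the function name is hypothetical.

    // Give each participant their own random ordering of the same task set,
    // so practice effects are spread evenly across tasks.
    function shuffleTasks<T>(tasks: T[]): T[] {
      const order = [...tasks];
      for (let i = order.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [order[i], order[j]] = [order[j], order[i]];
      }
      return order;
    }

    // Each participant sees the same tasks, but in a different order.
    console.log(shuffleTasks(["task1", "task2", "task3", "task4"]));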

Keep it in context

Don't think of unmoderated tests as a replacement for the way you currently do usability testing. Whether or not you run unmoderated tests, you should still carry out formative tests of your web site and have participants think out loud while you listen. Unmoderated tests are simply another tool in the box, useful for those situations when you want to really measure the usability of a web site. Rather than replacing existing methods, these kinds of test simply expand the landscape of techniques for obtaining user feedback.

About the author

David Travis

Dr. David Travis (@userfocus on Twitter) is a User Experience Strategist. He has worked in the fields of human factors, usability and user experience since 1989 and has published two books on usability. David helps both large firms and start-ups connect with their customers and bring business ideas to market. If you like his articles, why not join the thousands of other people taking his free online user experience course?


