Usability test tasks are the beating heart of a usability test: they determine the parts of a system that participants will see and interact with. Test tasks are so critical that some researchers argue they matter more than sample size: it seems that the number of tasks participants attempt, not the number of participants, is the critical factor in how many problems a usability test uncovers.
But for test tasks to uncover usability problems, usability test participants need to be motivated: they need to believe that the tasks are realistic and they must want to carry them out. So how do we create test tasks that go beyond the mundane and engage participants?
To help our discussion, I’m going to classify usability test tasks into 6 different categories. You don't need to create tasks in each of these categories — you simply need to review the categories and decide which kind of task will best motivate your participants.
The 6 categories are:
- Scavenger hunt.
- The Reverse Scavenger hunt.
- Self-generated tasks.
- Part self-generated.
- ‘Skin in the game’ tasks.
- Troubleshooting tasks.
Let's look at each of these in a bit more depth.
The Scavenger hunt
With this type of task, you ask users to do something that has one clear, ideal answer, making it a great way to find out whether users can complete tasks with your system. For a web site that sells luggage, a scavenger hunt task might read: “You’re travelling abroad next month and you’re looking for a good-sized bag that you can take on as hand luggage. You want the bag to be as big as possible while still meeting the airline’s maximum luggage dimensions (56cm x 45cm x 25cm). You have a budget of £120. What’s the most suitable bag you can get?” With a good scavenger hunt task there will be one perfect answer, so quiz the design team to find out the best solution to the task and then see if participants can find it.
The Reverse Scavenger hunt
With this type of task, you show people the answer — for example a picture of what they need to look for — and then ask them to go about finding or purchasing it. For example, if you’re testing out a stock photography application, you could show people an image that you want them to locate and then ask them to find it by creating their own keywords. This kind of task works well if you think that a textual description of the task might give away too many clues.
Self-generated tasks
Scavenger hunt and reverse scavenger hunt tasks work well when you know what people want to do with your web site. But what if you’re less sure? In these situations, try a self-generated task instead. With this type of task, you ask participants what they expect to do with the site (before you show it to them), and then you test out that scenario. For example, you might be evaluating a theatre-ticketing kiosk with regular theatre-goers. You begin the session by interviewing participants and asking what they expect to be able to do with the kiosk. For example, you might hear, ‘book tickets for a show’, ‘find out what’s on’ and ‘find out where to park’.
You then take each of the tasks in turn, and ask the participant to be more specific. For example, for the task, ‘book tickets for a show’, you’ll want to find out what kind of shows they prefer, such as a play, a musical or a stand-up routine. How many tickets would they want to book? On what day? For an evening or a matinee performance?
Your job is to help participants really think through their requirements before letting them loose with the system, to make sure that the task is realistic.
Part self-generated tasks
These tasks work well when you have a good idea of the main things people want to do with the site, but you’re less sure of the detail. With a part self-generated task, you define an overall goal (for example, ‘analyse your electricity usage’) and then ask the participant to fill in the gaps. For example, you can do this by asking participants to bring data with them to the session (such as electronic versions of past electricity bills) and allowing them to query their own data in ways that are of interest (for example, ‘what are my hours of peak usage?’).
‘Skin in the game’ tasks
A perennial problem with usability test tasks is getting participants to behave realistically. There’s a big difference between pretending to buy a holiday in Spain and really buying a holiday in Spain. No matter how well intentioned they are, participants know that, if they get it wrong, there are no consequences. You can mitigate this risk by giving participants real money to spend on the task.
The easiest way to do this with an e-commerce web site is simply to give participants a redeemable voucher to spend during the test, or reimburse their credit card after they have made a purchase.
A related approach for other systems is to incentivise the participant with the product itself. For example, if you’re testing a large format printer that creates photographic posters, you could ask people to bring in their digital photographs and then get them to use the printer to create the poster they want. The poster itself then becomes the participant's incentive for taking part.
As well as getting as close as possible to realistic behaviour (mild concerns become pressing issues), this approach also gives you the confidence that your participants are the right demographic, since their incentive is based on the very product you’re testing.
Troubleshooting tasks
Troubleshooting tasks are a special category of test task because people may not be able to articulate their task in a meaningful way. It would be misleading to give a participant a written task that you’ve prepared earlier since by its very nature this will describe the problem that needs to be solved. For example, a mobile phone may display an arcane error message if the SIM card is improperly inserted or a satnav system may fail to turn on. As far as the user is concerned, the product is simply not working and they don’t know why.
For these situations, it makes sense to recreate the issue with the product and then ask the user to solve it — either by starting the participant at Google or at your company’s knowledge base articles. You’ll then get great insights into the terminology that people use to describe the specific issue, as well as seeing how well your documentation stands up to real-world use.
Thanks to Miles Hunter for coming up with the idea of a taxonomy of usability tasks.
About the author
Dr. David Travis (@userfocus on Twitter) holds a BSc and a PhD in Psychology and he is a Chartered Psychologist. He has worked in the fields of human factors, usability and user experience since 1989 and has published two books on usability. David helps both large firms and start ups connect with their customers and bring business ideas to market. If you like his articles, you'll love his online user experience training course.