A usability test provides us with a host of observations describing how people interact with a product or service. What it doesn't provide is design solutions. To generate useful design solutions we need to first generate insights to identify the underlying problems and then create testable hypotheses to fix their cause.
The concept of strength of evidence plays an important role in all fields of research, but is seldom discussed in the context of user research. We take a closer look at what it means for user experience research, and suggest a taxonomy of research methods based on the strength of the data they return.
A common myth in usability testing goes like this: '5 participants are all you need to find 85% of the usability problems.' Understanding why this is a myth helps us generate ideas for increasing the number of problems we find in a usability test.
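The '5 participants' figure comes from a cumulative problem-discovery model: if a problem has probability p of being observed with any one participant, the chance of seeing it at least once in n sessions is 1 - (1 - p)^n. A minimal sketch (the often-quoted average detectability of p = 0.31 is an assumption from published studies, not a property of your product):

```python
def proportion_found(p: float, n: int) -> float:
    """Expected proportion of problems of detectability p
    seen at least once across n test participants."""
    return 1 - (1 - p) ** n

# With the oft-quoted average detectability p = 0.31, five
# participants uncover roughly 84% of problems...
print(round(proportion_found(0.31, 5), 2))  # 0.84

# ...but a rarer problem (p = 0.10) is more likely missed than found:
print(round(proportion_found(0.10, 5), 2))  # 0.41
```

The myth lies in treating 0.31 as universal: it is an average across problems and products, and low-detectability problems need far larger samples.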
A knowledge of psychology can help user researchers be more effective when they plan research, make observations, analyse data and present the results.
Recently, Todd Zazelenchuk, David Travis and I met up at our favourite watering hole in Staffordshire. As is often the case, we stumbled into a discussion about usability, this time mulling over the question of whether usability is or is not a science. It turned out to be a slightly more challenging question than we had expected.
Fundamentally, all user research answers one of two questions: (a) Who are our users and what are they trying to do? (b) Can people use the thing we've designed to solve their problem? You answer the first question with a field visit and you answer the second question with a usability test.
Gaining informed consent is a cornerstone of the social sciences. But it is sometimes poorly practised by user researchers. They fail to explain consent properly. They mix up the consent form with a non-disclosure agreement. And they mix up the consent form with the incentive. Improving the way you get consent will also improve the data you collect, because participants can be more open and because it makes user researchers more empathic.
2 May, 2016 - It is the 30th anniversary of the creation of the most widely used questionnaire for measuring perceptions of usability. The System Usability Scale (SUS) was released into this world by John Brooke in 1986. It has become an industry standard, with references in over 600 publications.
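Brooke's scoring scheme is simple enough to sketch: ten items rated on a 1-5 agreement scale, with odd-numbered (positively worded) items contributing the rating minus one and even-numbered (negatively worded) items contributing five minus the rating; the sum is multiplied by 2.5 to give a 0-100 score.

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    responses on a 1-5 agreement scale (Brooke, 1986). Odd-numbered
    items are positively worded, even-numbered items negatively."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each from 1 to 5")
    total = 0
    for item, rating in enumerate(responses, start=1):
        total += (rating - 1) if item % 2 == 1 else (5 - rating)
    return total * 2.5

# Strong agreement with every positive item and strong
# disagreement with every negative item gives the maximum score.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Note that a SUS score is not a percentage: interpreting a raw score needs comparison against published norms.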
A usability test is the wrong research method when you want to discover if there's a real user need for your product; when you want to understand the environment where your system is used; and when you want to find out how people use your product in their daily lives. So why do I almost always recommend a usability test as a team's first user research activity?
We take a look at some subtle yet pervasive experimenter effects, at ways they can bias the outcome of a design experiment, and at what we can do to control their influence.
What are the most common mistakes that test moderators make? I’ve observed usability tests moderated by consultants, in-house researchers, junior user researchers and experienced practitioners and there are some common mistakes I come across time and again. These mistakes are like a rite of passage on the route to becoming a user researcher, but even experienced practitioners aren’t immune from making them.
Over the last few months, I've worked with three clients who have each adopted a different approach to usability evaluation. These approaches are like different lenses used to observe the customer experience. No single approach is adequate on its own, but in combination the three approaches form a powerful strategy.
Most companies would claim to design products and services that are simple to use. But when you ask customers to actually use these products and services, they often find them far from simple. Why is there a disconnect between what organisations think of as "simple" and what users actually experience?
This month, I had the pleasure of being on a panel on usability testing at the UXPA conference with Rolf Molich, Steve Krug and Jakob Biesterfeldt. I was asked to take a deliberately provocative view on 5 issues in usability testing. In this article, I'll argue that these 5 common beliefs about usability testing are false.
With the advent of Lean UX — a kind of science of design — the ability to design and conduct an experiment should now be an important part of every designer’s skill set. But what is a design experiment? How do you develop an experiment? And how can you trust the results?
Usability testing is widely accepted as the de facto method for finding usability problems with a user interface. However, test sessions can suffer from a significant ‘observer effect’. This article describes some of the evidence for the observer effect along with some suggestions for ameliorating it.
UX debrief meetings are sometimes viewed as little more than a way to wrap up a project. This is a mistake. A UX debrief meeting can accomplish much more than just tie a bow on the project. But it's easier to get a debrief meeting wrong than it is to get it right — as I painfully discovered during the debrief meeting from hell.
The Usability Test Plan is a critical document to help you manage and organise a usability test. But it can sometimes appear too documentation-heavy in agile environments. What would a usability test plan look like if it was re-envisioned as a single page?
The Michelin-starred chef and restaurant troubleshooter can teach us a thing or two about providing design criticism, although some of it you may wish to avoid.
The most common types of usability test are remote usability tests, corporate lab-based tests, contextual usability tests and rented facility tests. What are the relative strengths and weaknesses of these different approaches to usability testing and how should you choose between them?
Failing to speak the user's language is an easy trap to fall into because you may not know the user's vocabulary and because technical terms often become second nature to the design team. As with much of user centred design, the secret lies in getting closer to your users so you can empathise with them.
There’s no better way to get feedback on the usability of your mobile app than by running a usability test. Although the process is the same as when testing a desktop app, there are quite a few differences in the details. Adjust your test to take account of these differences and you’ll be better placed to identify the real problems that real users will have with your app when used in an authentic context.
Making user experience happen within an organisation requires development teams to start involving users. This can be a difficult prospect for teams who have not engaged with users in the past. Here are 10 suggestions to help you make that first all-important contact with users.
There are two different types of usability test and each has different aims. Which test you choose will have implications for the number of participants you test, the methodology you use and the way you log, analyse and present the data.
A common concern of people running usability tests is that sooner or later they’ll run into a difficult participant. Who are these difficult characters and how can we prevent them from being a problem?
It's easy to create a mobile prototype on a desktop computer. What's not quite so easy is to usability test the prototype and still allow the participant to use mobile interaction gestures like long presses and two finger pinches. We can overcome this obstacle by combining Axure's mutually exclusive non-conditional cases with Wizard of Oz usability testing.
In ‘The Lean Startup’, Eric Ries describes a design process to help manage risk when developing new products and services under conditions of extreme uncertainty. This article describes three established user experience techniques we can use to support this design process: narrative storyboarding; paper prototyping; and the Wizard of Oz.
The new year is as good a time as any to review and improve the way you work. With a good user experience now widely seen as the key attribute of many high-tech products, it makes sense to review your own products to see how you can give them that user experience edge. Here are 20 quick, simple and virtually free ideas you can apply in 2012.
User experience metrics are measures that help you assess how your design stacks up against the needs of your customers and the needs of your business. Lab-based methods of collecting UX metrics are too slow and expensive to be part of most design projects, especially those using agile methodologies. But with online usability testing tools, regular user experience benchmarking is now cheap and quick to carry out.
In spite of a proliferation of books, articles and blogs explaining how to measure usability, few companies seem to put their usability metrics to good use. In this article we show how you can link the numbers from usability tests to the numbers that steer business decisions — and in the process, influence your company's business.
Many usability tests are worthless. Researchers recruit the wrong kind of participants, test the wrong kind of tasks, put too much weight on people's opinions, and expect participants to generate design solutions.
The magic of usability tests is that you get to see what people actually do with a system (rather than what they say they do). This gives you great insights into how people behave and how to improve your design. But if your tasks lack realism you’ll find that people just go through the motions and don’t engage with the test — reducing the credibility of your results. Here are 6 ways to captivate participants with great test tasks.
It's sometimes said that usability professionals are good at finding problems, but not quite as good at coming up with creative solutions. This article describes a creativity technique called SCAMPER that will help you effortlessly generate dozens of design solutions to any usability problem you identify.
Observing a usability test seems simple but it's easy to lose focus during a session and record only the dramatic or obvious usability problems. As you watch the test, you should make minute-by-minute observations of the participant's behaviour as single letter codes. Datalogging ensures you note all behaviours, not just the ones that stand out, and provides all you need to quickly create a list of usability issues you can pass to the design team. This article includes a macro-free Excel spreadsheet you can use to timestamp your observations.
For most products, it's easy to track down participants for a usability test. But there are some products where end-users are difficult to find and recruit. For these products, it's better to use surrogate users as a proxy for genuine users than not to usability test at all, but you must manage the risks appropriately.
A typical usability test may return over 100 usability issues. How can you prioritise the issues so that the development team know which ones are the most serious? By asking just 3 questions of any usability problem, we are able to classify its severity as low, medium, serious or critical.
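Mapping a small number of yes/no judgements onto a severity scale can be sketched mechanically. The three questions used below (is the problem on a critical task? is it hard to overcome? is it persistent?) are illustrative assumptions, not necessarily the article's actual criteria:

```python
# Hypothetical sketch: classify severity from three yes/no judgements.
# The specific questions are assumptions for illustration only.
SEVERITY = ["low", "medium", "serious", "critical"]

def classify(on_critical_task: bool, hard_to_overcome: bool,
             persistent: bool) -> str:
    """Map the count of 'yes' answers (0-3) onto four severity levels."""
    yes_count = sum([on_critical_task, hard_to_overcome, persistent])
    return SEVERITY[yes_count]

print(classify(True, True, False))  # serious
```

The appeal of a scheme like this is consistency: two researchers answering the same three questions will assign the same severity, which is harder to guarantee with gut-feel ratings.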
If you've been tasked with running a usability test, then you'll love this free graphic guide. Aimed at people about to moderate their first usability test, it covers the essential techniques you'll need to moderate the session.
Moderating a usability test seems effortless, but there's a lot more to good listening than opening your ears. Here are 15 suggestions to improve your own listening skills.
There’s no shortage of software that will record videos from usability tests, but how do you put the clips together in a way that will convince management and the design team to take action on your results? Our solution is to use the rule of 5: Create 5 separate highlights videos each focusing on one usability issue, with each issue comprising 5 clips and with each video lasting 5 minutes or less.
We're often told that senior managers don't have the time to read a detailed report describing the findings from a usability test. This means our thoroughly argued, carefully analysed and clearly presented 60-page report could have no effect on improving the product or changing the culture. How can we better engage managers with our data?
"Know thy user" is the first principle of usability, so it's important that you involve the right kind of people in your usability study. These 8 tips for screening participants will show you how to recruit articulate, representative users for your research, quickly filter out the people you don't want and help you avoid the dreaded "no show".
Being frugal during economic hard times is good business practice. So how can you squeeze your usability budget and still deliver great insights? These 10 suggestions for streamlining your usability efforts explode the myth that usability is expensive and time-consuming.
In an unmoderated usability test, a computer automates the process of administering a usability test. This means you can test with much larger samples than with a conventional test, calculate reliable measures of usability and feel confident that you're capturing your customer's context of use.
This eBook contains all you need to make sure that you're fully prepared for your next usability test. The document includes easy to customise usability test forms, such as screeners, a discussion guide, questionnaires and data logging sheets.
ISO have released a new standard for measuring the usability of everyday products, like ticket machines, mobile phones and digital cameras. This standard, ISO 20282, includes test methods for quantifying the usability of consumer products to ensure they meet a pre-defined quality level. This development is exciting because the standard's focus on usability measurement reflects a sea change in the evolving practice of usability. In the old world, usability specialists just found usability problems with a design. In the new world, usability specialists also answer the question: "How usable is this design?"
How should you go about collecting data in usability tests? This article examines the data collection process in usability studies and describes some popular data logging solutions. Since most of these tools are expensive, we show you how you can use Microsoft Excel with Visual Basic macros to collect the data.
Most usability tests culminate with a short questionnaire that asks the participant to rate, usually on a 5- or 7-point scale, various characteristics of the system. Experience shows that participants are reluctant to be critical of a system, no matter how difficult they found the tasks. This article describes a guided interview technique that overcomes this problem based on a word list of over 100 adjectives. We also include a spreadsheet to generate and randomise the word list.
Are you a CIO, purchasing officer, or IT manager, about to invest in productivity software for your company? If you are, here's a question you should ask your supplier before you sign on the dotted line: "Just how usable is this product?" Astonishingly, most companies won't be able to answer, and those that try will answer the question only vaguely. But now help is at hand. It's called CIF. And it's about to change the game.
Morae makes it easy to log usability tests, create video highlights and allow observers to view a test in progress. But Morae is designed to support usability tests of software, not paper prototypes. This how-to article shows you how to exploit the full functionality of Morae when carrying out a paper prototype test.
This year marks an important anniversary for people who moderate usability tests. In a classic study carried out exactly 30 years ago, psychologists showed that people are very poor at explaining the reasons behind their choices. This is why usability tests focus on what people do, not on what people say. So why do so many usability test moderators continue to ask participants, "Why?"
It's a truism that even a bad usability test will help improve your software. But the findings from different usability tests are notoriously difficult to compare. This makes it difficult to track usability improvements or to see how you compare against an earlier product. A new international standard looks set to solve this problem.
People often throw around the terms "objective" and "subjective" when talking about the results of a usability test. These terms are frequently equated with the statistical terms "quantitative" and "qualitative". The analogy is false, and this misunderstanding can have consequences for the interpretations and conclusions of usability tests.
Important roads in London are known as 'red routes' and Transport for London do everything in their power to make sure passenger journeys on these routes are completed as smoothly and quickly as possible. Define the red routes for your web site and you'll be able to identify and eliminate any usability obstacles on the key user journeys.
This Excel spreadsheet allows you to measure task completion rates and time-on-task, analyse questionnaire data, and summarise participant comments. Latest version just released!
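The core calculations such a spreadsheet performs are straightforward to sketch. A minimal example, with made-up session data (the field names and values are assumptions for illustration):

```python
import statistics

# Illustrative session records: one dict per participant per task.
sessions = [
    {"completed": True,  "time_s": 94},
    {"completed": True,  "time_s": 132},
    {"completed": False, "time_s": 210},
    {"completed": True,  "time_s": 101},
]

# Task completion rate: successes as a proportion of all attempts.
completion_rate = sum(s["completed"] for s in sessions) / len(sessions)

# Time-on-task is conventionally summarised over successful attempts
# only, since failure times measure giving up, not task difficulty.
success_times = [s["time_s"] for s in sessions if s["completed"]]

print(f"Completion rate: {completion_rate:.0%}")                    # 75%
print(f"Mean time-on-task: {statistics.mean(success_times):.0f} s")  # 109 s
```

With small usability-test samples, any such point estimate should be reported with a confidence interval before being used to steer decisions.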
Usability practitioners are called on, not only to conduct many research studies during their careers, but also to read, review, and advise on usability studies that have been conducted and reported by others. The ability to critically review the research of others, and to help stakeholders weigh up the merits or shortcomings of research data and conclusions, is an extremely valuable skill. These checkpoints will help you ensure your review covers the key issues.
Discount usability techniques are a great way to eradicate usability problems. But they can never answer the question, "How usable is this system?" We blow the dust off some techniques commonly used in the early days of usability testing to see if they can provide an answer.
Rather than create yet another definition of usability, we decided to take a different approach and work through the alphabet, picking one word for each letter to capture the flavour of the field. So we proudly present the A-Z of usability or usability in 26 words.
Articles by tag
- 7 articles tagged accessibility
- 4 articles tagged axure
- 5 articles tagged benefits
- 16 articles tagged careers
- 8 articles tagged case study
- 1 article tagged css
- 8 articles tagged discount usability
- 2 articles tagged ecommerce
- 17 articles tagged ethnography
- 14 articles tagged expert review
- 2 articles tagged fitts law
- 4 articles tagged focus groups
- 1 article tagged forms
- 6 articles tagged guidelines
- 11 articles tagged heuristic evaluation
- 7 articles tagged ia
- 14 articles tagged iso 9241
- 11 articles tagged iterative design
- 2 articles tagged legal
- 11 articles tagged metrics
- 3 articles tagged mobile
- 8 articles tagged moderating
- 3 articles tagged morae
- 2 articles tagged navigation
- 9 articles tagged personas
- 15 articles tagged prototyping
- 7 articles tagged questionnaires
- 1 article tagged quotations
- 4 articles tagged roi
- 17 articles tagged selling usability
- 12 articles tagged standards
- 47 articles tagged strategy
- 2 articles tagged style guide
- 4 articles tagged survey design
- 5 articles tagged task scenarios
- 2 articles tagged templates
- 21 articles tagged tools
- 57 articles tagged usability testing
- 3 articles tagged user manual