Most designers have fairly strong opinions about what makes a good design and what makes a bad design. Those opinions are based on experience with previous projects as well as a ‘design sense’ or design aesthetic. Any designer worth his or her salt can usually look at a product or web site and identify at least one mistake in its design.
One of my favourite web sites is Anne Holland’s ‘Which Test Won’. Every couple of weeks, Anne presents you with a pair of alternative web page designs and asks you to decide which one performs better. For example, one design may have the word ‘Search’ written on a search button; the other design might have the same word written in the search field. You vote on the design you think won.
What makes the site interesting is that Anne knows the answer. This isn’t because she’s a guru designer but because she has the results from an A/B test. With an A/B test, half of your users see the first design and the other half see the alternative design. You then look at the results and see which one met your business objective: for example, which design increased revenue per visitor?
Sometimes I get the answer correct. But other times I get the answer wrong. When I get the answer wrong I look back at the two designs and I can usually convince myself why I got it wrong. Once I know the answer, it’s obvious why version A or version B did best: hindsight, as they say, is 20/20 vision.
The truth is, few designers ever put their design to the test.
There’s a better way.
Good designers benchmark their design decisions against business objectives: does the design make the product more or less successful? Justifying your design in any terms other than the client’s business objectives is a fundamental design mistake.
Here are 6 steps for benchmarking your user experience designs against your client’s business objectives. Not only will these steps help you please your clients, but you’ll also find you work more productively, because you only carry out design activities that really make a difference.
The steps are:
- Decide on a key business objective.
- Identify the UX factors that will help achieve the objective.
- Propose design research to improve the UX factors.
- Measure the benchmark state of each UX factor.
- Track changes in each UX factor until target values are achieved.
- Test if the business objective is being met.
1. Decide on a key business objective
If we’re going to prove relevance, we first need to work out what we’re relevant to. In this step you’ll articulate the business objective behind the product, service or web site that you’re developing. In some cases the business objective will be obvious: for an e-commerce site it might be sales; for a web site selling professional services it might be the number of people who sign up for an email newsletter; for a firm that develops software, it might be the number of people who upgrade to the paid version of a product after downloading the trial version.
If you’re not sure, the rule of thumb is to follow the money. Ask: “How does the organisation make or save money with this product?” For example, for a charitable web site it might be the number of people who make a donation; for a web site that exists solely to promote a brand it might be the number of Facebook ‘likes’; for a government web site it might be the number of citizens who access an online service more than once. If you’re still unsure, then check with the product manager or other stakeholders in the organisation.
Don’t proceed until you have a business objective that is actionable and auditable, like the ones above. A business objective like “Be the number one brand in the social space” is meaningless.
2. Identify the UX factors that will help achieve the objective
In this step, you need to think about the key UX factors that impact the business objective.
To make this more concrete, let’s take a specific example of a project we worked on recently: this was an estate agent’s (realtor’s) web site. The client had a very clear business objective: they wanted to sell their inventory of houses more quickly. The goal of the web site in this process was to increase the number of people requesting a viewing, since the more people that view a house, the more likely it is to sell. That’s our business objective.
Now we can generate some theories for improvement. Let’s examine the web site and consider the possible UX factors that affect the business objective. We’ll probably need to do some initial research — for example, we could find people who used the site but didn’t request to view a house and ask them why, or we could send an online survey to a sample of house purchasers. Then we’ll have some confidence that these are the right UX factors to focus on.
For example, we might decide that the following UX factors directly impact the business objective:
- The ability to easily search for houses by price and location. The rationale for this is that if people can’t find a house in the system, they won’t ask to view it.
- The ease of navigating the 360-degree panoramic room viewer. The rationale for this is that if people can’t easily see pictures of the house, they’ll go to another site where they can.
- The ability to easily compare houses by feature, such as garage or conservatory. The rationale for this is that people may have a shortlist of houses that they want to choose between.
Note that these UX factors are assumptions. Although they seem sensible, we don’t know for sure that improving them will lead to an increase in the number of people that want to view houses. I’m reminded of a story that Eric Ries tells in ‘The Lean Startup’. Designers at the Grockit web site assumed that changing the registration process to support ‘Lazy Registration’ would result in more people using the service. (Lazy Registration is a clever way to let people try out a site before they sign up. You allow people to use all of the core features and even save their data. When they eventually sign up, all of their data is transferred to the new account.)
The designers at Grockit ran a test. They compared a group of users who experienced Lazy Registration with another group of users who couldn’t use any functionality until they signed up. Best practice would predict that the Lazy Registration group would do better. In fact, the test showed that (for their site) Lazy Registration had no impact on registration, activation or subsequent retention.
This doesn’t mean that Lazy Registration isn’t a good idea in general. It just wasn’t a good idea for Grockit in particular. This makes the point that our UX factors are assumptions that we need to test. The alternative is to spend weeks improving the ability to easily search for houses by price and location (say) only to find it has no impact on the business objective.
3. Propose design research to improve the UX factors
Now, as we move into the design process, we need to think about the specific UX activities that we need to carry out in order to meet our UX objectives. This provides us with a roadmap for design research that we can use to guide development, with the UX factors as our signposts along the way. The UX activities make the priorities of the release very clear to everyone on the design team.
Assuming that we’ve identified the right UX factors to influence the business objective, we now have a set of criteria to guide development sprints. For example, one of the UX factors above is ‘The ability to easily search for houses by price and location’. This helps the design team focus on what’s important.
I’ll admit, at the moment, this attribute isn’t very well defined. For example, what does ‘easily search’ mean? We certainly need to firm this up. But right now, it’s still clear enough to help us understand where the product priorities lie. The priority is making it easy to search for houses by price and location, not (say) creating a customised PDF download of the house details. This leads to specific ideas for UX activities: for example, we’ll run a usability test to find where the roadblocks are with the house search process. This will provide us with a list of usability bugs that we need to fix to improve the search experience.
4. Measure the benchmark state of each UX factor
In this stage, we’ll derive some values we can use to assess current performance. Tables 1 and 2 show some (simulated) data from a test that we can use to make the point. (Note that these tables use four participants simply to make them easier to read. In an actual benchmark test, you’ll want a larger sample size: typically around 20 users.)
Table 1: Task success by user (simulated data)

| User | Task 1 | Task 2 | Task 3 | Task 4 |
|------|--------|--------|--------|--------|
| 1    | Pass   | Pass   | Pass   | Pass   |
| 2    | Fail   | Pass   | Pass   | Fail   |
| 3    | Pass   | Pass   | Fail   | Pass   |
| 4    | Fail   | Pass   | Pass   | Pass   |
Each cell shows whether a user passed or failed the task.
For example, let’s say that Task 1 is our ‘Search for houses by price and location’ task. We can see that only 50% of people complete this task successfully. We can now set a realistic target for improvement — for example, in the next release, we might expect 75% of users to be able to complete this task.
There are other measures we can take from this table: for example, Task 1 looks like it needs a bit of work, whereas Task 2 is doing well. We could also calculate the overall task completion score by dividing 12 (the number of successful task completions) by 16 (the total number of task attempts), giving 75%.
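As a sketch of the arithmetic, here is how the per-task and overall completion rates can be computed. The pass/fail values are the simulated data from the example (True = pass, False = fail), not real test results:

```python
# Simulated task success results for four users across four tasks
# (True = pass, False = fail), matching the figures quoted in the text.
results = {
    "Task 1": [True, False, True, False],
    "Task 2": [True, True, True, True],
    "Task 3": [True, True, False, True],
    "Task 4": [True, False, True, True],
}

# Completion rate per task: successes divided by attempts.
per_task = {task: sum(r) / len(r) for task, r in results.items()}

# Overall completion rate: all successes divided by all attempts.
total_attempts = sum(len(r) for r in results.values())
total_passes = sum(sum(r) for r in results.values())
overall = total_passes / total_attempts

print(per_task["Task 1"])  # 0.5 -- our search task needs work
print(overall)             # 0.75
```

The same calculation scales unchanged to a realistic sample of around 20 users; only the lists get longer.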
We can do a similar analysis of task times.
Table 2: Time on task in seconds (simulated data)

| User | Task 1 | Task 2 | Task 3 | Task 4 |
|------|--------|--------|--------|--------|
| 1    | 285    | 92     | 140    | 205    |
| 2    | —      | 85     | 130    | —      |
| 3    | 319    | 101    | —      | 190    |
| 4    | —      | 88     | 155    | 210    |
Each cell shows the time taken to complete the task. (Some cells are blank because those users never completed the task).
Again, let’s assume that Task 1 is our ‘Search for houses by price and location’ task. We can see that searching for a house takes an average of 302 seconds, or about 5 minutes. We can now set a realistic target for improving this — for example, let’s see if we can speed up the search process so it’s completed in 3 minutes.
(Note: Simply reducing time on task is one way of tracking design progress. Another is to compare the time taken by test participants with the time taken by trained, expert users, which lets you express time on task improvements as a percentage of expert performance.)
Now, in this pre-design phase, we have some benchmark values in place to know if we’re moving forwards or backwards with our designs.
5. Track changes in each UX factor until target values are achieved
At this point, we can start testing our assumptions. We create new versions of the web site and test each one with users. At each stage, we see if we’re getting closer to our target value or not.
To track progress, we can apply a traffic light system: for example, a red light if we’re well below the target; a yellow light if we’re close to the target; and a green light if we’re on target.
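A traffic light report like this is easy to automate. The sketch below maps a measured UX value against its target; the 90% ‘close enough for yellow’ threshold is an assumption you should tune to your own reporting needs:

```python
def traffic_light(measured, target, close=0.9):
    """Map a measured UX value to a red/yellow/green status.

    'close' is the fraction of the target that counts as nearly there
    (an assumed threshold -- adjust it for your own reports).
    """
    if measured >= target:
        return "green"
    if measured >= close * target:
        return "yellow"
    return "red"

# Task 1 completion is 50% against a 75% target: well below, so red.
print(traffic_light(0.50, 0.75))  # red
print(traffic_light(0.70, 0.75))  # yellow (within 90% of target)
print(traffic_light(0.80, 0.75))  # green
```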
The figure below shows an example of this kind of benchmark usability report that we produced a while back for a web site. The report includes the three key measures of usability — effectiveness, efficiency and satisfaction — and combines these into an overall benchmark score to track progress.
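One simple way to combine the three measures into an overall score is a weighted average, with each measure first put on a common 0–100 scale. The equal weighting below is an assumption — a real report may weight the measures differently:

```python
def benchmark_score(effectiveness, efficiency, satisfaction,
                    weights=(1, 1, 1)):
    """Combine the three usability measures (each on a 0-100 scale)
    into one overall benchmark score via a weighted average.
    Equal weighting is an assumption, not a standard."""
    measures = (effectiveness, efficiency, satisfaction)
    return sum(w * m for w, m in zip(weights, measures)) / sum(weights)

# e.g. 50% task completion, 40% of expert speed, satisfaction of 68/100
print(round(benchmark_score(50, 40, 68), 1))  # 52.7
```

The virtue of a single score is that stakeholders can watch one number move towards green, while the underlying report still shows which measure is dragging it down.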
6. Test if the business objective is being met
Ultimately, we need to tie our testing back to the business objective. Although it may seem obvious that we’ve improved the web site because we’ve improved performance on the UX factors, this is still a circle that needs closing. And the best way to close the circle is to return to where we started: we’ll run an A/B test.
In our example, Version A is our original release. Version B is our version with a UX improvement — for example, we’ve improved the ability to easily search for houses by price and location. It’s important that Version B doesn’t also include lots of new features or other major changes, because if it does, we’ll never know if the improvement was due to improving the search process or because of some other factor.
The best way to do this is to make the change quickly, release it and then test to see if the change improved the business objective. This may mean you make changes to the software and run tests on a weekly or even a daily basis (depending on the traffic you get).
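To judge whether the difference between Version A and Version B is real rather than noise, a standard approach is a two-proportion z-test on the conversion rates (here, viewing requests per visitor). The visitor and conversion counts below are purely illustrative:

```python
from math import erf, sqrt

def ab_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: is Version B's conversion rate reliably
    different from Version A's? Returns both rates and a two-sided
    p-value (small p-value = unlikely to be chance)."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, p_value

# Illustrative numbers (not real data): 1,000 visitors per version.
p_a, p_b, p_value = ab_test(30, 1000, 48, 1000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  p = {p_value:.3f}")
```

With daily or weekly releases, a helper like this lets you check each change against the business objective as soon as enough traffic has accumulated.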
Data-driven design
When I speak with clients these days, I can see that most of them value design. At the very least, design is usually a step in their development process and not just something that happens on a whiteboard over a coffee.
But this doesn’t mean clients will value your design aesthetic or your opinion on what makes good or bad user interfaces. Probe deeper and you’ll see that the value of good design lies in the business benefits: good design creates more successful products.
Try this 6-step method to prove the business benefit of your own UX work.
About the author
Dr. David Travis (@userfocus on Twitter) holds a BSc and a PhD in Psychology and is a Chartered Psychologist. He has worked in the fields of human factors, usability and user experience since 1989 and has published two books on usability. David helps both large firms and start-ups connect with their customers and bring business ideas to market. If you like his articles, you'll love his online user experience training course.