Designing effective web surveys comes down to following a 6-step process:

  1. Formulate your research question.
  2. Identify your population and sample.
  3. Design the questionnaire.
  4. Pilot test the questionnaire.
  5. Collect the data.
  6. Analyse the data.

Problems occur with surveys when people skip one of these steps.

[Figure: The 6 steps to creating great surveys]

Formulate your research question

Surveys provide information to solve problems. So before you begin your survey you need to be able to confidently articulate the specific problem that you’re trying to solve.

A typical problem I’ll hear from people is, “I want to know what people think about our new product.” Although laudable, this is too vague a problem to design a survey around. What specific problem are you trying to solve and what new information do you need to solve it?

Further questioning might reveal that the product is suffering from an unduly high rate of returns. So the problem might be better articulated as, “Why do people return our product shortly after buying it?” This helps us realise that we need to create questions that identify people’s initial expectations about the product and how the product falls short of these expectations — questions we might have missed if we had stayed with the vaguely articulated problem.

A clear problem statement also helps us distinguish between information we must gather and information that is merely ‘nice to have’, which in turn helps keep the survey short.

Identify your population and sample

All surveys are susceptible to error. Most people are aware of ‘sampling error’, the ‘plus or minus’ figure quoted with opinion polls. When we take a sample, we survey a selection of people and hope that their views are representative of the whole population we are interested in. We can use statistics to quantify the amount of sampling error in a survey, but these statistics are valid only if the sample is truly random.
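For example, we can compute the ‘plus or minus’ figure with the standard margin-of-error formula for a proportion. Here's a minimal sketch in Python (the sample sizes are just for illustration):

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # 95% margin of error for a proportion from a simple random sample;
        # p = 0.5 is the worst case (widest interval)
        return z * math.sqrt(p * (1 - p) / n)

    print(f"{margin_of_error(375):.1%}")   # ~5.1% for 375 respondents
    print(f"{margin_of_error(1000):.1%}")  # ~3.1% for 1,000 respondents

Because the margin of error shrinks with the square root of the sample size, quadrupling the sample roughly halves the margin of error.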

A truly random sample is rare in practice, for two reasons. First, the research method you have chosen may exclude certain people (so-called instrument error). For example, using a web survey will exclude people who don’t have access to the web.

The second reason is that non-respondents are often different from respondents in a way that matters to the outcome of the study (so-called non-response error). For example, imagine we devised a survey to measure people’s experience with the Internet, and that we sent out the survey invitation by e-mail. Novice Internet users might be much more reluctant to click on a link in an e-mail message, suspecting that messages with links are fraudulent.

Non-response error is a serious problem with web surveys. Because it’s so easy to do, researchers tend to send their survey to everyone.

For example, you may send the survey to 10,000 people on your mailing list and find that 1,000 respond. Although the sampling error will be small, the large non-response error is a serious source of bias. This is because those people who responded may not be representative of the total population — they may like your company more and so be more disposed to take the survey. In this example, the survey respondents are different from non-respondents in a way that will affect the survey results.

In this example, it would be better to randomly sample 500 people from the 10,000 and aim for a 75% response rate (375 completed surveys). A 75% response rate from a randomly selected sample is better than a 10% response rate from everyone. Remember that the key is to select from your population randomly. Whenever your response rate is less than 60%, you should be on the lookout for non-response error.
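To put the sampling step into practice, here's a minimal sketch (assuming your mailing list is simply a Python list of e-mail addresses; the addresses are hypothetical):

    import random

    # Hypothetical mailing list of 10,000 addresses
    mailing_list = [f"user{i}@example.com" for i in range(10000)]

    random.seed(42)  # optional: makes the sample reproducible
    sample = random.sample(mailing_list, 500)  # simple random sample, without replacement

Because random.sample draws without replacement and gives every address an equal chance of selection, the result has exactly the property the sampling-error statistics depend on.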

Design the questionnaire

The survey itself may also be a source of bias. For more on crafting good survey questions, try 20 tips for writing web surveys.

Here are some common errors I’ve seen in survey questions:

  • Using unbalanced response scales (see the sketch after this list).
  • Using response categories that overlap (also illustrated below).
  • Asking vague questions.
  • Asking leading questions.
  • Asking nosy questions.
  • Using jargon or abbreviations.
  • Assuming people know enough to answer.
  • Asking people questions that require too much thought.
  • Asking double-barrelled questions.
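To make the first two errors concrete, here is a small sketch (the scales and age ranges are invented for illustration):

    # Unbalanced scale: three positive options but only one negative,
    # which nudges respondents towards favourable answers
    unbalanced_scale = ["Excellent", "Very good", "Good", "Poor"]

    # Balanced scale: equal positive and negative options around a neutral midpoint
    balanced_scale = ["Very good", "Good", "Neither good nor poor", "Poor", "Very poor"]

    # Overlapping categories: a 35-year-old fits two ranges
    overlapping_ages = ["18-35", "35-50", "50+"]
    non_overlapping_ages = ["18-34", "35-49", "50+"]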

Pilot test the questionnaire

A pilot test provides a way of finding problems with the survey before you invest in the cost of collecting data. You should never send out a survey without pilot testing it first.

Pilot testing is best done in two phases: in the first phase, you talk with the people who will use the survey results — the stakeholders. Because they have practical knowledge about the kind of data that are being collected, they can spot technical problems that you might miss.

You conduct the second phase of the pilot test with a sample of respondents. It is important to watch people fill out questionnaires in person rather than simply emailing them a link. That way, you can watch for signs that people are puzzled, check their understanding of certain questions, and see if they misinterpret instructions.

Collect the data

Once you’ve got this far in the process, all you should need to do is write an engaging invitation to get people to respond.

Assuming that you send out an email invitation, make sure that you include a relevant subject line and a recognisable email sender name so your invitation doesn’t end up in people's junk mail folder. You’ll also increase the response rate if you describe the incentive, personalise the invitation and make it urgent (‘survey closes in 7 days’).
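As a minimal sketch of these points, here's how such an invitation might be assembled with Python's standard email library (the names, addresses and survey URL are all hypothetical):

    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "Jane Doe, Acme Research <research@acme.example>"       # recognisable sender name
    msg["To"] = "alex@example.com"
    msg["Subject"] = "Help us improve the Acme Widget (2-minute survey)"  # relevant subject line
    msg.set_content(
        "Hi Alex,\n\n"                                                    # personalised greeting
        "You recently bought an Acme Widget and we'd value your feedback.\n"
        "As a thank you, we'll enter you into our prize draw.\n"          # describe the incentive
        "The survey closes in 7 days: https://survey.example/acme\n"      # make it urgent
    )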

Two weeks is long enough to keep most surveys open: evidence shows that over half of survey responses arrive within the first day, with 7 out of 8 arriving within the first week.

Analyse the data

You’ll use two types of statistics in your analysis:

  • Descriptive statistics: summarise what’s going on in your data.
  • Inferential statistics: help you judge whether an observed difference between groups is dependable or might have happened by chance.

Most online survey tools, like SurveyMonkey, make it straightforward to calculate descriptive statistics for your survey and will even create graphs for you. To carry out inferential statistics, you’ll need to export the raw data and do some number crunching in a program like SPSS.
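If you'd rather crunch the exported data in Python than in SPSS, a minimal sketch might look like this (the file and column names are hypothetical):

    import pandas as pd
    from scipy import stats

    # Load the raw export from your survey tool
    df = pd.read_csv("survey_export.csv")

    # Descriptive statistics: summarise a 1-7 satisfaction rating
    print(df["satisfaction"].describe())

    # Inferential statistics: is the difference between two groups dependable,
    # or might it have happened by chance? (Welch's t-test)
    new = df.loc[df["user_type"] == "new", "satisfaction"]
    returning = df.loc[df["user_type"] == "returning", "satisfaction"]
    t, p = stats.ttest_ind(new, returning, equal_var=False)
    print(f"t = {t:.2f}, p = {p:.3f}")

A p value below 0.05 is the conventional threshold for treating an observed difference as unlikely to be chance alone.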

About the author

David Travis

Dr. David Travis (@userfocus) has been carrying out ethnographic field research and running product usability tests since 1989. He has published three books on user experience, including Think Like a UX Researcher. If you like his articles, you might enjoy his free online user experience course.


