Designing effective web surveys comes down to following a six-step process:
- Formulate your research question.
- Identify your population and sample.
- Design the questionnaire.
- Pilot test the questionnaire.
- Collect the data.
- Analyse the data.
Problems occur with surveys when people skip one of these steps.
Formulate your research question
Surveys provide information to solve problems. So before you begin your survey you need to be able to confidently articulate the specific problem that you’re trying to solve.
A typical problem I’ll hear from people is, “I want to know what people think about our new product.” Although laudable, this is too vague a problem to design a survey around. What specific problem are you trying to solve and what new information do you need to solve it?
Further questioning might reveal that the product is suffering from an unduly high rate of returns. So the problem might be better articulated as, “Why do people return our product shortly after buying it?” This helps us realise that we need to create questions that identify people’s initial expectations about the product and how the product falls short of these expectations — questions we might have missed if we had stayed with the vaguely articulated problem.
A clearly stated problem statement also helps us distinguish between information we must gather and information that is ‘nice to have’ — which means we can also use it to keep our survey short.
Identify your population and sample
All surveys are susceptible to error. Most people are aware of ‘sampling error’: the ‘plus or minus’ figure quoted with opinion polls. When we take a sample, we use a selection of people and hope that their views are representative of the whole population we are interested in. We can use statistics to quantify the amount of sampling error in a survey, but these statistics are valid only if the sample is truly random.
This is rarely the case, for two reasons. First, the research method you have chosen may exclude certain people (so-called instrument error). For example, using a web survey will exclude people who don’t have access to the web.
The second reason is that non-respondents often differ from respondents in a way that matters to the outcome of the study (so-called non-response error). For example, imagine we devise a survey to measure people’s experience with the Internet and send the invitation by e-mail. Novice users of the Internet may be much more reluctant to click on a link in an e-mail message, suspecting that messages with links are fraudulent.
Non-response error is a serious source of error with web surveys. This is because researchers tend to send their survey to everyone as it’s easy to do so.
For example, you may send the survey to 10,000 people on your mailing list and find that 1,000 respond. Although the sampling error will be small, the large non-response error is a serious source of bias: the people who responded may not be representative of the total population. They may like your company more and so be more disposed to take the survey, so respondents differ from non-respondents in a way that will affect the survey results.
In this example, it would be better to randomly sample 500 people from the 10,000 and aim for a 75% response rate (375 responses): a 75% response rate from a randomly selected sample is more trustworthy than a 10% response rate from everyone. The key is to select from your population randomly. Whenever your response rate is less than 60%, you should be on the lookout for non-response error.
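The arithmetic behind this advice can be sketched in a few lines of Python. This uses the standard 95% margin-of-error formula for a simple random sample (ignoring the finite-population correction); the scenario numbers come from the example above:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% confidence half-width for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# Scenario A: blast the survey to all 10,000 and get 1,000 replies (10%).
# Scenario B: randomly sample 500 and get 375 replies (75%).
moe_a = margin_of_error(1000)  # roughly +/- 3 percentage points
moe_b = margin_of_error(375)   # roughly +/- 5 percentage points

# Scenario A has the smaller sampling error, but its 90% non-response
# means the figure is misleading: the bias from unrepresentative
# responders is not captured by the margin of error at all.
```

Note that the margin of error quantifies sampling error only; it says nothing about non-response bias, which is exactly why the headline ‘plus or minus’ figure can flatter a badly sampled survey.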
Design the questionnaire
The survey itself may also be a source of bias. For more on crafting good survey questions, try 20 tips for writing web surveys.
Here are some common errors I’ve seen in survey questions:
- Using unbalanced response scales.
- Using response categories that overlap.
- Asking vague questions.
- Asking leading questions.
- Asking nosy questions.
- Using jargon or abbreviations.
- Assuming people know enough to answer.
- Asking people questions that require too much thought.
- Asking double-barrelled questions.
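To make two of these errors concrete, here is a small sketch (the response options are invented for illustration) contrasting an unbalanced scale with a balanced one, and overlapping age categories with non-overlapping ones:

```python
# Unbalanced: three positive options but only one negative,
# which nudges respondents towards a favourable answer.
unbalanced = ["Excellent", "Very good", "Good", "Poor"]

# Balanced: equal positive and negative options around a neutral midpoint.
balanced = ["Very satisfied", "Satisfied",
            "Neither satisfied nor dissatisfied",
            "Dissatisfied", "Very dissatisfied"]

# Overlapping categories: a 25-year-old fits two buckets.
overlapping = ["18-25", "25-35", "35-45"]

# Non-overlapping: every age fits exactly one bucket.
non_overlapping = ["18-24", "25-34", "35-44"]
```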
Pilot test the questionnaire
A pilot test provides a way of finding problems with the survey before you invest in the cost of collecting data. You should never send out a survey without pilot testing it first.
Pilot testing is best done in two phases: in the first phase, you talk with the people who will use the survey results — the stakeholders. Because they have practical knowledge about the kind of data that are being collected, they can spot technical problems that you might miss.
You conduct the second phase of the pilot test with a sample of respondents. It is important to watch people fill out questionnaires in person rather than simply emailing them a link. That way, you can watch for signs that people are puzzled, check their understanding of certain questions, and see if they misinterpret instructions.
Collect the data
Once you’ve got this far in the process, all you should need to do is write an engaging invitation to get people to respond.
Assuming that you send out an email invitation, make sure that you include a relevant subject line and a recognisable email sender name so your invitation doesn’t end up in people's junk mail folder. You’ll also increase the response rate if you describe the incentive, personalise the invitation and make it urgent (‘survey closes in 7 days’).
Two weeks is long enough to keep most surveys open: evidence shows that over half of survey responses arrive in the first day, with seven out of eight arriving within the first week.
Analyse the data
You’ll use two types of statistics in your analysis:
- Descriptive statistics: summarise what’s going on in your data.
- Inferential statistics: help you judge whether an observed difference between groups is dependable or one that might have happened by chance.
Most online survey tools, like SurveyMonkey, make it straightforward to calculate descriptive statistics for your survey and will even create graphs for you. To carry out inferential statistics, you’ll need to export the raw data and do some number crunching in a program like SPSS.
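As a sketch of the distinction, here is a minimal example using only Python’s standard library, with made-up satisfaction ratings from two hypothetical respondent groups (people who returned the product versus people who kept it). The test statistic is Welch’s t; a full analysis with exact p-values would use a package such as scipy:

```python
import math
from statistics import mean, stdev

# Hypothetical 1-5 satisfaction ratings -- invented for illustration.
returners = [2, 1, 3, 2, 2, 1, 3, 2, 4, 2]
keepers   = [4, 5, 3, 4, 4, 5, 4, 3, 5, 4]

# Descriptive statistics: summarise each group.
print("returners:", mean(returners), round(stdev(returners), 2))
print("keepers:  ", mean(keepers), round(stdev(keepers), 2))

# Inferential statistics: Welch's t for the difference in means.
def welch_t(a, b):
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    se = math.sqrt(va / len(a) + vb / len(b))
    return (mean(a) - mean(b)) / se

t = welch_t(keepers, returners)
# |t| well above ~2 suggests the gap is unlikely to be chance alone;
# exact p-values need the degrees of freedom (e.g. scipy.stats.ttest_ind
# with equal_var=False).
```

The descriptive step tells you *what* the two groups said; the inferential step tells you whether the difference between them is large enough, relative to the spread in the data, to take seriously.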
About the author
Dr. David Travis (@userfocus on Twitter) holds a BSc and a PhD in Psychology and is a Chartered Psychologist. He has worked in the fields of human factors, usability and user experience since 1989 and has published two books on usability. David helps both large firms and start-ups connect with their customers and bring business ideas to market. If you like his articles, you'll love his online user experience training course.