An online survey is one such tool. When used correctly, an online survey can be a powerful way of quickly and inexpensively gathering information for precisely targeted markets, across wide geographic areas. When used incorrectly, an online survey can provide a gobbledegook of misleading and inconclusive data.
Several common practices in survey design lead to poor results. For example: a survey asks respondents to rank 15 different factors by order of importance when only a handful of those factors are actually relevant. Other surveys err in the opposite direction, asking respondents to rate a complicated product on a simple scale of 1-10 without exploring the factors that underlie the rating. And, perhaps most annoying of all, some surveys lead respondents into narrative dead-ends: to continue with the survey, you are forced to choose among a limited set of options, none of which reflects your views.
The main problem in most of these cases is that surveys are designed from the point of view of the people developing the survey and analyzing the results rather than that of the respondent taking it. Yet if the goal is to gain genuine insight into one's customers and clients, taking the user experience into account is essential. UX is just as important in an online survey as it is in any other product or service.
So how do you make online surveys work? If you follow these ten rules you should be off to a great start. An online survey should be:
- Hypothesis-driven: questions are tailored to individual hypotheses and structured in such a way that they can confirm or refute those hypotheses. This is the most effective way to make the results of a survey actionable.
- Compelling and engaging: a survey should make respondents feel they are involved in a meaningful process. This leads to better, more thoughtful answers. Note that this is different from merely making a survey entertaining: when respondents say they enjoyed taking one of our surveys, it's usually not because of fancy Flash videos or brightly-colored buttons, but because they felt the survey gave them adequate opportunity to express their views, and that they may even have learned something from the process.
- MECE: whenever possible answer choices should be mutually exclusive and collectively exhaustive, and when not possible, adequate opportunities should be given for respondents to introduce new categories or options.
- Analytically-designed: important questions and issues should be broken down into their constituent parts, and each part evaluated independently of the others. This leads to a deeper understanding of what motivates each respondent and makes survey results more actionable. This is especially important in the innovation space where respondents are asked to evaluate products they've never encountered before.
- Narratively coherent: a survey is a journey through time, and each question should follow naturally from the previous one. Narrative structure is essential to how human beings understand meaning and context, so changes in the order of questions can have dramatic effects on how those questions are answered.
- Rigidly-structured: although respondents should be offered opportunities to describe things in their own words, such questions often lead to inconsistent or incomplete responses in an online survey. Unlike other research tools where an open-ended approach is essential, an online survey is a very unforgiving environment, and answer options should be designed and formalized as much as possible beforehand.
- Carefully screened: almost any online survey will include a number of people who try to game the system and give false answers for the sake of an extra incentive. Strategies need to be employed to prevent them from succeeding.
- Sent to an appropriately stratified sample: a sound survey relies on more than having the right number of responses. One also has to ensure that the sample accurately reflects all the relevant subsections of the population. In some cases, such as B2B surveys in which a limited number of companies dominate a market, this may be far more important than statistical sample size.
- Aimed at the right degree of statistical significance: not all surveys can be sent to a sample size that meets the highest level of statistical certainty of the social sciences. Not only is this financially unfeasible in some B2B markets, but it may not even be possible in others. In these cases, an iterative research approach using smaller sample sizes can often lead to more reliable conclusions.
- Open to follow-up: one of the greatest outcomes of an online survey is discovering an unexpected result that challenges a previously held assumption. If no time is set aside to reconnect with respondents, that result, and the opportunities it may represent, may never be properly understood.
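The sampling and significance points above can be made concrete with a short sketch. Assuming a simple random sample within each stratum, the standard sample-size formula for estimating a proportion, n = z²·p(1−p)/e², gives the number of responses needed for a target margin of error, and proportional allocation then splits that total across population strata. The strata names and population figures below are purely hypothetical:

```python
import math

def required_sample_size(margin_of_error, confidence_z=1.96, p=0.5):
    """Sample size for estimating a proportion: n = z^2 * p * (1 - p) / e^2.

    p = 0.5 is the conservative (worst-case) choice; z = 1.96 is ~95% confidence.
    """
    return math.ceil(confidence_z ** 2 * p * (1 - p) / margin_of_error ** 2)

def proportional_allocation(strata_sizes, total_n):
    """Split total_n across strata in proportion to each stratum's population."""
    population = sum(strata_sizes.values())
    return {name: round(total_n * size / population)
            for name, size in strata_sizes.items()}

# Hypothetical B2B market where a few large firms dominate.
strata = {"enterprise": 200, "mid-market": 1800, "small": 8000}
n = required_sample_size(margin_of_error=0.05)  # 385 responses at ~95% confidence
print(n, proportional_allocation(strata, n))
```

Note how proportional allocation assigns only a handful of responses to the dominant "enterprise" stratum; this is exactly the B2B situation described above, where covering the few firms that matter is more important than raw sample size, and an iterative approach with smaller, well-targeted samples may serve better.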
A survey is a very unforgiving medium, and except for the simplest cases, online surveys need to be designed with adequate care and attention. When we say, "We don't do boring research projects"™ at Justkul Inc., we mean that we take the time and effort to follow these principles and thereby ensure every one of our online surveys is both successful and informative.
What do you think? Do you have stories of badly-designed surveys? Have we left something important out? We look forward to your comments!
By @jfhannon, CEO at Justkul Inc., a research firm focused on the needs of strategy and private equity.