So much of research depends on the questions that are asked, and nowhere is this more true than with questionnaires. These documents can structure research that draws deep insight from respondents, or, depending on how they are designed, they can artificially bias and taint the results.
For this post we will be talking about delivery via online surveys, but questionnaires can also be the drivers for phone, paper, or in-person surveys as well as the basis for focus groups or user experience interviews (though they are typically called discussion guides in these situations).
Anatomy of a questionnaire
For many surveys, where the respondents come from a broad base, the first questions act as a screener. For example, if you are looking for insights about how consumers use smartphones to watch video, you can quickly eliminate anyone who does not own a smartphone. The number of people screened out still has value, but you do not want those respondents cluttering up your survey results with irrelevant answers. Screening seems obvious, yet unqualified respondents still slip through, often in less obvious ways than this example.
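The screening step above can be sketched in a few lines. This is a minimal illustration with hypothetical field names (`owns_smartphone`, `hours_video`) and made-up responses, not a real survey platform's API:

```python
# Hypothetical list of survey responses; field names are illustrative only.
responses = [
    {"id": 1, "owns_smartphone": True, "hours_video": 5},
    {"id": 2, "owns_smartphone": False, "hours_video": 0},
    {"id": 3, "owns_smartphone": True, "hours_video": 2},
]

# Keep only qualified respondents, but record how many were screened out:
# the share of qualifiers (the incidence rate) is itself a useful number.
qualified = [r for r in responses if r["owns_smartphone"]]
screened_out = len(responses) - len(qualified)
incidence_rate = len(qualified) / len(responses)
```

The key design point is that the screened-out count is kept rather than discarded, so the screener contributes data even though those respondents never reach the main questions.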
The first part of the survey is set up to ease the respondent into answering and should contain the easy questions that do not require memory or much thought.
Ensure that the in-depth, sensitive questions come at the end. If someone feels uncomfortable answering a sensitive question, say household income, you already have their topical insights, and it matters less if they end the survey early.
Broad to specific questions
Again, we use the broad-to-specific pattern because specific questions asked too early may not yet be relevant and can taint the answers we get. Likewise, questions on the same theme should be grouped together so that respondents experience a kind of “flow” while answering. A disjointed question that interrupts this flow can throw respondents off and actually influence their responses.
Rankings and coded answers
Most questions on online surveys are “coded”, meaning that the respondent selects items from a list or a scale. The reason is fairly obvious: this type of answer is orders of magnitude easier to analyze than open text. This makes the selection of question types critical.
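To see why coded answers are so much easier to work with, consider tallying them. A coded question reduces to counting selections; the answer strings below are invented for illustration:

```python
from collections import Counter

# Coded answers: each respondent picked one option from a fixed list.
answers = ["Daily", "Weekly", "Daily", "Monthly", "Daily", "Weekly"]

# Tallying is a one-liner; free-text answers would need cleaning,
# de-duplication, and manual categorization before any counting.
counts = Counter(answers)
top_answer, top_count = counts.most_common(1)[0]
```

Open-text answers, by contrast, have to be read, normalized, and hand-coded into categories before you can produce even a simple frequency table.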
How you design answer scales is important and will depend on what you are looking for and where your survey is run. There are cultural differences as well. For example, survey answers in Latin America tend to cluster high on the range, say 9 out of 10, while answers in Canada cluster closer to 7 out of 10, so the region strongly influences how the results should be interpreted.
The number of options also makes a difference. Too many answers to a question make it impossible for respondents to compare them effectively. Use a heuristic of about 5 answers (plus or minus) per question. Readers with a background in user experience work may be reminded of the 7 ± 2 rule for items in menus. This rule is certainly related; the survey heuristic sits at the lower end because people answering surveys are often not very engaged and have little or no payoff.
There is no fixed rule for the number of points in a range (I prefer a 1-7 Likert scale, for example). As long as your survey uses a consistent range, your respondents will learn it and give you internally consistent answers. This is why, during analysis, the differences between a single respondent's answers are often more interesting than the absolute numbers.
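One simple way to act on this during analysis is to center each respondent's scores on their own mean. This is a sketch under the assumption that answers sit on a shared numeric scale; the respondents and scores below are invented to mirror the regional clustering mentioned earlier:

```python
def center_scores(scores):
    """Return each score minus the respondent's own mean.

    Removes individual (and cultural) baseline differences so that
    comparisons focus on relative preferences, not absolute levels.
    """
    mean = sum(scores) / len(scores)
    return [s - mean for s in scores]

# Two hypothetical respondents with different baselines
# but the same relative pattern across three questions.
high_baseline = [9, 10, 8]   # clusters high on the range
low_baseline = [6, 7, 5]     # clusters lower on the range
```

After centering, both respondents show the identical relative pattern, which is exactly the within-respondent signal that matters more than the raw numbers.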
When designing your questionnaires, follow these guidelines and then test, test, test. That is the way to get a feel for how to ask relevant questions in a neutral way that elicits the most useful results.
This post is part of a series where I discuss materials from a night course on analytics. The first post was Marketing research and analytics – ask the right question. Stay tuned for more as the course progresses.