PAR Surveys – How do we design actionable and unbiased surveys?

We begin survey development by asking our client’s key stakeholders to write down the top few things they really need to learn from the research. That helps focus the survey content. If you are conducting a customer satisfaction tracking study, we begin with an appropriate content template and then blend in the key concerns your stakeholders identify. We work from the questionnaire outline to be sure your survey unfolds logically and that appropriate skip patterns capture the information you require without wasting respondents’ time. Once the broad content flow is established, we look closely at question wording to be sure we are asking the best questions in the best possible way. For example, if you want to learn about future purchase intent for a fast-moving food or beverage, you could ask:

  • “How often do you think you might buy (brand) chips in the next month?” – OR…
  • “If (brand) chips were available at the store where you normally shop and at a price you normally pay, how likely would you be to buy (brand) chips in the next month?”
    …If we choose to talk about the future (a hypothetical forecast of behavior), we might get better information simply by putting some tighter reference points around the key question.

    The alternative to the hypothetical response above is to talk about actual past purchase history. Even then, slightly different questions can still yield different answers. For instance, should we ask…? 
  • “How often have you purchased (brand) chips in the past month?” – OR…
  • “How often do you buy (brand) chips in a typical month?” – OR…
  • “Thinking about the brand of chips in this picture, how often have you purchased (brand) chips in the past month?”
    …In the first two versions of this question, can we necessarily say that purchases in “a typical month” are the same as purchases in the “past month”? (What about seasonality, known replenishment cycles, holidays, etc.?) The choice of wording needs to be driven by what you know about the audience and the situation – or perhaps we need to embed a bit of “research on research” in the survey design (something we do all the time!). The third scenario improves the quality of the response simply by offering a picture to help ensure that the respondent is thinking about the right brand of chips. We have found that when pantry checks are done, respondents are often wrong about the specific variety of a product they claim to buy – so photos really do help!
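
    To make the “research on research” idea concrete, here is a minimal sketch (in Python, with hypothetical question wordings and respondent IDs) of how a split-sample test might assign each respondent to one version of the question so the two wordings can be compared within the same field period. It illustrates the mechanic only – it is not a description of any production survey system:

        import hashlib

        # Hypothetical competing wordings for the same purchase question.
        WORDINGS = {
            "A": "How often have you purchased (brand) chips in the past month?",
            "B": "How often do you buy (brand) chips in a typical month?",
        }

        def assign_wording(respondent_id: str) -> str:
            # A stable hash of the respondent ID splits the sample roughly
            # 50/50 and always gives the same person the same version.
            digest = hashlib.sha256(respondent_id.encode("utf-8")).digest()
            return "A" if digest[0] % 2 == 0 else "B"

        # Each respondent then sees only the wording for their assignment.
        print(WORDINGS[assign_wording("resp-0001")])

        # Confirm the split on a simulated sample of 1,000 respondent IDs.
        counts = {"A": 0, "B": 0}
        for i in range(1000):
            counts[assign_wording(f"resp-{i:04d}")] += 1
        print(counts)  # approximately even split between A and B

    Hashing the ID rather than drawing at random keeps the assignment stable if a respondent pauses and later resumes the interview.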

    We always check a draft against the usual list of survey question problems: (a) questions that assume knowledge the respondent may not have, (b) items that use indecipherable industry jargon or abbreviations, (c) double-barreled questions (two things asked in one question), (d) inadequate response options (e.g., should there be a “don’t know”?), (e) confusing document spacing/layout (e.g., answer blanks that don’t clearly correspond to the question being asked), and (f) language complexity that exceeds the respondents’ reading ability.
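
    A couple of these problems can even be flagged mechanically before a human review. Here is a toy sketch (in Python, with a made-up jargon list) of a questionnaire “lint” pass that catches suspected jargon and possible double-barreled wording; it is a rough heuristic, not a substitute for an experienced editor:

        # Toy heuristics only: flag suspected jargon (item b) and possible
        # double-barreled wording (item c) from the checklist above.
        JARGON = {"SKU", "CPG", "ACV"}  # made-up example list

        def lint_question(text: str) -> list[str]:
            issues = []
            for token in text.replace("?", "").replace(",", "").split():
                if token in JARGON:
                    issues.append(f"possible jargon: {token}")
            if " and " in text.lower() and text.count("?") == 1:
                issues.append("possibly double-barreled (two asks in one question)")
            return issues

        print(lint_question("How satisfied are you with our price and our SKU selection?"))
        # -> ['possible jargon: SKU',
        #     'possibly double-barreled (two asks in one question)']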

    There is an element of art in the final layout of a survey. Are the answer options visually distinct on the document? Is the font size right for the audience (perhaps seniors)? If the survey is to be machine read, has the layout been tested in an optical character reader? If it is a CATI (computer-assisted telephone interview) survey, do the screens advance as you envisioned when you wrote the draft?
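
    One way to check screen routing before the CATI system is programmed is to write the skip patterns down as data and walk scripted test respondents through them. The sketch below (in Python, with hypothetical question IDs and skip rules) shows the idea; an actual CATI package would have its own routing language:

        # Hypothetical skip-pattern table: (question, answer) -> next question.
        SKIPS = {
            ("Q1", "no"): "Q5",      # non-buyers skip the brand-detail block
            ("Q3", "never"): "END",
        }
        ORDER = ["Q1", "Q2", "Q3", "Q4", "Q5"]

        def next_question(current: str, answer: str) -> str:
            # Honor a skip rule if one exists; otherwise advance in order.
            if (current, answer) in SKIPS:
                return SKIPS[(current, answer)]
            idx = ORDER.index(current)
            return ORDER[idx + 1] if idx + 1 < len(ORDER) else "END"

        # Walk one scripted test respondent through the flow and print the
        # path so the routing can be compared with the questionnaire outline.
        answers = {"Q1": "yes", "Q2": "weekly", "Q3": "never"}
        path, q = [], "Q1"
        while q != "END":
            path.append(q)
            q = next_question(q, answers.get(q, ""))
        print(path)  # ['Q1', 'Q2', 'Q3'] – Q3 'never' routes to END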

    As you approach the end of survey design, ask yourself whether the length really will work for your respondents. (A great test is to ask your boss to complete the survey you are about to launch!) We have seen a disturbing trend toward lengthy surveys in recent years: budget pressures drive some clients to pack in every question they can think of whenever a survey is fielded. There is ample industry evidence that long surveys reduce response rates on the current project, increase mid-survey terminations, and reduce respondents’ willingness to help with future surveys. We encourage you to ask only questions that will prove actionable and to drop the “nice to know” items.

    One last thing… At PAR we will run a pilot test of your survey. Every survey. Every time. That quality management step reduces surprises and controls budgets. One more PAR standard that helps to deliver information you can trust!