Speeding up the Survey Process: A Guide to Minimizing Programming Timelines

In our last post, Speeding up the Survey Process: A Guide to Shortening Timelines Without Jeopardizing Quality, we provided several metrics on typical survey timelines, as well as advice on how to speed up each stage of the process:

Rather than cutting down on fielding time, it’s important to make each stage of the process as efficient as possible.

In this post, we will focus on the programming stage. Programming is arguably the most complex part of many survey projects and is often the most delayed. We have seen surveys sit and stall in programming for weeks, even on projects that are pressed for time.

When all runs smoothly, a simple survey may take less than a day to program, while a questionnaire with heavy logic (e.g., skip, display, loop & merge, etc.) can take two or three days and a very complex survey may take even longer.

However, in our experience, programming gets derailed for three main reasons. Here are the biggest time drains on programming—and what to do about each. 

Unclear programming direction within the questionnaire

While many research providers will program exactly what is written, we take time and care to consider the client’s intention. For example, if the question stem does not state “please select all that apply” and no programming direction indicates such a question type, we will still program a multi-select question if the answer choices make the most sense that way.

However, reducing potential ambiguity is an easy way to increase programming speed, as it cuts down on clarification edits. Common issues we must often clarify include:

  • Unclear programming logic, such as whether a question should be displayed or not based on prior responses
  • Confusing flow of questions by theme (e.g., asking about attitudes toward certain vendors, jumping to another topic, then returning with more vendor questions); surveys move in a linear direction, so related questions should be grouped together
  • Reconciling a question with its answers (e.g., using the word “rank” in the question stem and then providing options to “rate”)

Takeaway: Eliminating “guesswork” on the programmer’s end saves a lot of back-and-forth.

Updates to the questionnaire after programming has started 

While it may seem that programming an “80% version” of the questionnaire is an efficient way to jumpstart the process, we have found this often slows down the delivery. Why? Imagine building a new house. The new homeowners are eager to start work and provide the contractor with a mostly-final version of the plans for a one-story house. The builders finish the bulk of the work but then receive the revised plans, which include a second story. Unfortunately, this calls for an overhaul that includes knocking down walls, rewiring, reconnecting pipes, etc. In this scenario, like with a survey, the project ends up taking more time overall than it would have if it had been delayed until the final plans were set. 

There is a lot of programmed logic that powers complex surveys. Logic may be tied to an individual option, an overall question, or even a block (a group of questions). Making even a small change to one area of the survey can have significant downstream effects on how the survey works.
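To make this dependency problem concrete, here is a minimal sketch in Python of how display logic ties questions together. The question IDs, wording, and data structure are invented for illustration; no real survey platform is assumed.

```python
# Illustrative model of a programmed survey: each question may carry
# display logic that references an earlier question's answer.
survey = {
    "Q1": {"text": "Do you currently use Vendor A?", "display_if": None},
    "Q2": {"text": "How satisfied are you with Vendor A?", "display_if": ("Q1", "Yes")},
    "Q3": {"text": "Why did you stop using Vendor A?", "display_if": ("Q1", "No")},
}

def dangling_references(survey):
    """Return questions whose display logic points at a question that no longer exists."""
    return [qid for qid, q in survey.items()
            if q["display_if"] and q["display_if"][0] not in survey]

# A "small" post-programming edit: the client asks to remove Q1...
del survey["Q1"]

# ...and two downstream questions now have broken display logic.
print(dangling_references(survey))  # ['Q2', 'Q3']
```

One deleted question silently breaks every piece of logic that referenced it, which is why a minor revision to a finalized questionnaire can ripple through the whole programmed survey.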

Takeaway: Refine and finalize the questionnaire’s design first, then move to programming. Not the other way around.

An inefficient testing and editing process

When we edit a survey with a client, we test for:

  • Content – Is it all there? Is it portrayed the way the client would like? Does it follow survey best practices?
  • Logic – Does it make sense intuitively? Does it work correctly?
  • Aesthetic – Is the survey easy to read and navigate for the respondent or the moderator?

The survey is tested internally before we send it to the client. Despite this, edits are common, and the process is often an iterative one that requires detailed communication. It’s common even for a simple, finalized survey to undergo a few changes after programming, as coding the survey into the online platform often shines a new light on the questionnaire.

The speed and flow of editing depend on how communicative and detailed each party can be. Slow responses, unclear direction, or contradictory instructions can slow things down. Thus, a standard operating procedure can be an easy way to expedite testing and editing. Rather than communicating edits piecemeal via email, for example, we can use tracked changes in a Word document or a running change log in Excel to check off each task and make comments.

Takeaway: Align all stakeholders in advance of programming and devise a thoughtful workflow for communicating edits.


Programming trade-offs are a double-edged sword. Trying to perfect the questionnaire can cut into precious fielding time. On the other hand, speeding into the fielding phase without a well-thought-out questionnaire can prove detrimental during data collection, thereby slowing down the overall timeline. If you have a hard deadline, carefully weigh the incremental benefits of perfecting the questionnaire against the negative consequences of reducing fielding time.