How to Select the Best Mode for a Survey: Target Sample Size


What is the Target Sample Size?

The desired number of survey completes is called the target sample size and is often referred to as the ‘n’ size on the project (e.g., a sample size of 200 respondents is the same as saying 200n). The achievable number of completes is called feasibility. In our previous post, we explored how the target audience impacts mode selection; we can now apply that knowledge of who the target population is to the question of how many people within it we are seeking. Therefore, this post doesn’t just look at sample size but rather considers three aspects in tandem:

1. Population size
2. Incidence rate
3. Sample size

To think about feasibility, it’s helpful to imagine a funnel. The population enters at the mouth, and the sample size comes out at the bottom. The inputs and outputs are relative to each other: a “large” sample size for one audience may be a “small” sample size for another.

There is a funnel effect for nearly every web survey because only a portion of the people we invite will end up completing the full survey. To plan how many invitations it will take to yield the desired sample size, we consider the response rate and incidence rate. 
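Inverting the funnel gives a quick way to size the outreach: each invitation yields, on average, the response rate times the incidence rate in completes, so the invitations required are the target divided by that product. A minimal sketch in Python (the function name and rates are illustrative, not part of any standard tool):

```python
import math

def invitations_needed(target_n: int, response_rate: float, incidence_rate: float) -> int:
    """Estimate the invitations required to yield the target number of completes.

    Each invitation yields, on average, response_rate * incidence_rate completes,
    so we invert the funnel and round up.
    """
    return math.ceil(target_n / (response_rate * incidence_rate))

# e.g., 200 completes at a 10% response rate and a 30% incidence rate
print(invitations_needed(200, 0.10, 0.30))  # -> 6667
```

In practice both rates are estimates, so a planned outreach would include a buffer above this floor.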

Response Rate: The percent of invited participants who click into the survey. People’s interest levels, time commitments, and inbox clutter all vary, so we can’t assume everyone will respond to our invitation. 

Incidence Rate (also known as IR or Qualifying Rate/QR): The percent of respondents who make it through the screeners. It is simply the number of people who qualify divided by the number of people who enter the survey. A high IR may be 70% or more, while a low IR is 30% or below. In our last post, we discussed the qualifying rate and how “targetable criteria,” which vary by panel, impact the incidence rate.

Online surveys seem to make the most sense for large sample sizes. An online mode is certainly more efficient because it requires fewer staff hours per outreach. It’s possible to send an email invitation to many hundreds or thousands of people at once, whereas a phone-based approach is comparatively time consuming. If both online and phone modes are equally appropriate for an audience, the online option will often be more efficient and less expensive, which is one of the reasons web surveys have become increasingly popular.

Web surveys are optimal for an audience that can be well targeted using the panel’s pre-identified attributes, resulting in more accurate IR estimates and more targeted sampling.

Takeaway: Online outreach is very efficient for large n projects, so long as the audience is relatively “easy” to target. 

While web surveys may be effective for some projects with large sample sizes, an online mode may not work in all cases. To illustrate the effect of our three factors (population size, IR, and sample size), we have created a visual representation of the ease or difficulty of online fielding. The darkest green indicates the easiest to field online, while the darkest red indicates the most difficult. No numbers are included, as they would all be relative to each other. This is more of a heuristic than a calculator.

To see how this might play out in practice, let’s assume that we want 200 completes. In this scenario, we have 10,000 people in a panel to whom we can send the survey. Assuming a 10% response rate, only 1,000 people will start the survey. Assuming a 30% incidence rate (IR) at the start of the survey, we should get 300 completes (30% * 1,000 = 300) and easily achieve our desired 200n. In that same scenario, a larger sample size of 400n would not be feasible.
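The funnel arithmetic above can be checked directly. A short sketch (variable names are illustrative):

```python
panel_size = 10_000
response_rate = 0.10   # 10% of invitees click into the survey
incidence_rate = 0.30  # 30% of starters pass the screeners

starts = panel_size * response_rate           # 1,000 survey starts
expected_completes = starts * incidence_rate  # 300 completes

print(round(expected_completes))      # 300
print(expected_completes >= 200)      # True: a 200n target is feasible
print(expected_completes >= 400)      # False: a 400n target is not
```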

What if we are targeting a niche group with a 2% IR? (Remember, we usually don’t know the qualifying rate at the start of the project!) A very low IR of 2% would yield only ~20 completes, even though we started with a panel of 10,000 people. There are two issues with this low-yield approach that need to be considered when deciding on a mode.
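The low-yield arithmetic can be checked the same way. A short sketch (variable names are illustrative):

```python
panel_size = 10_000
response_rate = 0.10
incidence_rate = 0.02  # a very low 2% IR

starts = panel_size * response_rate           # 1,000 survey starts
expected_completes = starts * incidence_rate  # ~20 completes
no_value_outreach = panel_size - round(expected_completes)

print(round(expected_completes))  # 20
print(no_value_outreach)          # 9,980 people got nothing from the invite
```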

First, the obvious issue is an insufficient sample size. Since an online panel contains a fixed number of people, once you “run out” of people there are no more options (unless you bring in another panel, but then you need a way to prevent duplicate responses, as one person may participate in multiple panels). Wanting 200n and ending up with only 20n is frustrating and often defeats the initial goals of commissioning the study. Insufficient sample size is not the only issue, though.

The second issue, often disregarded, is a potentially tarnished reputation due to frequent low-IR studies. In other words, if panelists continue to receive surveys that do not result in completes (i.e., full payment to the participant), overall “faith” in the survey process begins to decline among respondents, which can cause a larger ripple effect for future surveys.

Imagine you receive a survey invitation. You take the survey and receive payment. The next time a survey comes around, you will likely want to participate again. Now imagine you receive a survey invitation and get screened out. Would you be likely to participate in the next one? Panelists may disengage if they try to participate in multiple studies and are continuously terminated. Since a panel company’s main product is its panelists, it is easy to see how this could become a big cost. In the example above, 9,980 people received no value from the outreach, which is a high price to pay for only 20 completes.

Fortunately, such a limited target audience would be a good candidate for a CATI survey, as we can scale up our team of moderators to custom recruit survey participants using desk research, customer lists, proprietary databases, and referrals, reaching respondents well beyond those who are “opted in” to online panels. A phone-based approach can also lessen the friction of screening out unqualified participants, since targeting is of higher fidelity to start with and respondents are engaged personally.

Takeaway: A large sample of a “difficult” audience may be better collected via CATI.

At Coleman, we realize that before launching a survey, our clients need assurance that the target sample size can be reached. We always include our feasibility estimates in our survey proposals, so if the numbers vary by survey mode, our clients know ahead of time and can make an informed decision.

In the next post in our series, we will discuss questionnaire design and the importance of aligning the design with the mode.