Four Ways to Boost Survey Completion Rates

Four Factors that Most Impact Online Survey Completion Rates


Survey fatigue is a real thing.  Marketers and researchers operating in the B2B world and niche consumer markets are often challenged to reach their sample goals.  Maximizing completion rates (the percentage of participants who pass the qualifying screening questions and then go on to complete the survey) is one way to overcome this challenge.  That is why our lead analyst, Dr. Lori Dockery, investigated four years of survey projects to determine the impact of various factors on completion rate.  Her full article and results were published in the July 2017 edition of Quirks and can be accessed here: Quirks Article

I am sharing the top-line findings for a more general audience.  What did Lori find in her research?  As you might expect, she learned that completion time (calculated by summing the individual screen times, which removes the error introduced by people who stopped and then restarted later) accounted for over 17% of the variance in the completion rate; since variance explained is the square of the correlation, this follows from the -0.419 correlation with median completion time shown in the table below.  When she examined only surveys fielded with our opt-in Iowa Opinion Panel, completion time accounted for 26% of the variance.  At 14 minutes into the survey, completion rates of those who had passed the screener questions dropped below 90%.  The relationship is clear: even with opt-in panels or lists, longer surveys have lower completion rates.
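If you collect your own timing data, here is a minimal sketch of that screen-time-summing approach, assuming a hypothetical layout in which each respondent's per-screen durations are recorded in seconds (the respondent IDs and numbers below are invented for illustration):

```python
from typing import Dict, List

def completion_minutes(screen_times_sec: List[float]) -> float:
    """Total completion time in minutes, summed screen by screen.

    Summing per-screen durations (rather than subtracting the survey's
    start timestamp from its end timestamp) keeps a long pause between
    screens from inflating the estimate for someone who stopped and
    later restarted.
    """
    return sum(screen_times_sec) / 60.0

# Hypothetical respondents: per-screen durations in seconds.
respondents: Dict[str, List[float]] = {
    "r001": [12.4, 30.1, 45.0, 22.7, 60.3],
    "r002": [8.9, 25.6, 41.2, 18.4, 55.0],
}

for rid, times in respondents.items():
    print(f"{rid} completed in {completion_minutes(times):.1f} minutes")
```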

So, we confirmed the intuitively obvious.  What else did Lori learn from her analysis?  She examined 16 variables, from the sample size to whether participants were professionals or general consumers.

Factor                              Pearson Correlation  p-value
Professionals vs Consumers           0.097               0.332
Local vs National vs International  -0.240               0.015
Incentive Was IOP Points             0.281               0.004
Incentive Was Drawing               -0.137               0.171
Incentive Was Gift Card             -0.029               0.773
Incentive Was Report                -0.365               0.000
Incentive Value                     -0.354               0.000
Survey Included Kano                -0.149               0.134
Survey Included Conjoint            -0.134               0.180
Number of Ads                        0.127               0.203
Number of Numeric Boxes             -0.378               0.000
Number of Open-ended Boxes          -0.167               0.094
Client Masking                       0.089               0.373
Familiarity with Client             -0.099               0.325
Median Time to Complete             -0.419               0.000
Completes Needed                    -0.080               0.029

[Note: The p-value is the probability of observing a correlation at least this strong if the true correlation were zero (no relationship). The closer the p-value is to zero, the more confident you can be that the relationship you see is non-random. The closer the p-value is to 1, the more likely the correlation you see was produced by chance; there may be too much noise in the data, or there may truly be no relationship between the variables. The cut-off for p-values that we usually use is 0.05. If the p-value is lower, we trust the correlation we see. If the p-value is higher, we treat the correlation/relationship between the variables as unsupported.]
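To make the table's columns concrete, here is a minimal sketch of how such a correlation and its p-value can be computed, assuming Python with SciPy; the numbers are invented for illustration and are not from Lori's dataset:

```python
from scipy.stats import pearsonr

# Hypothetical data: median completion time (minutes) and completion
# rate (%) for eight invented surveys -- illustrative only.
median_minutes   = [5, 7, 9, 11, 13, 15, 18, 22]
completion_rates = [96, 94, 93, 90, 88, 85, 80, 74]

r, p = pearsonr(median_minutes, completion_rates)
print(f"Pearson correlation: {r:.3f}")  # strongly negative for these numbers
print(f"p-value: {p:.4f}")              # well below 0.05 for these numbers
```

With the 0.05 cut-off described in the note, a p-value this small means we would treat the negative relationship as real rather than noise.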

Requiring more work from our participants reduced survey completion rates.

Other factors with a negative relationship to completion rates were the inclusion of complicated, time-consuming exercises such as Kano questions or a conjoint choice exercise, along with the number of open-ended or numeric boxes (where participants must type in an answer).  These findings make sense as well, since such questions demand more concentration and effort from the survey participant and take longer than selecting a response from a list or scale.  Of course, this does not mean you should avoid these survey tools for gathering qualitative responses or enabling advanced analysis of the results.  However, you should keep their use in any one survey to a minimum.  If you have a Kano exercise, do not also ask participants to answer five open-ended questions.  If the output of a conjoint exercise is key to your research goals, do not also ask ten questions about brand.

Higher incentives did not increase completion rates.

As for incentives, Lori’s research showed that panel points or gift cards/cash worked best, while a prize drawing or a copy of the survey findings had a negative correlation with completion rates.  Even in our B2B studies, where you might expect receiving the survey results to be attractive to participants, that incentive did not improve completion rates.

The value of the incentive did not lift completion rates; in fact, the correlation was negative.  Most likely, the incentive’s value has its greatest impact on whether or not a participant decides to click into the survey in the first place, and its effect fades after the participant has made the commitment to engage.

Lori’s examination also revealed that the more respondents we needed to complete a study, the more likely we were to see people drop out of the survey.  This might be because larger sample sizes often require more than one sampling source, or because by the time you reach a larger sample size you have exhausted the “eager” survey takers and the remaining respondents are, in general, more reluctant to complete a full survey.

Two factors that you might expect to affect survey completion did not.

Client masking was one of those factors; the second was the number of ads included in surveys testing creative assets.  We expected that more participants would complete a survey when they knew who it was for, but Lori did not find an effect from masking or unmasking.  Masking could still affect the likelihood of starting the survey, or the answers to individual questions.  You should decide whether or not to mask based on other factors and not worry about how it will impact completion rates.

With regard to the number of ads in a survey, we expected that more ads might correlate negatively with completion rates, since they add to the work required, but no significant correlation appeared.  It may be that participants enjoy reviewing ads, so unless the number becomes burdensome, ads keep them engaged in the survey.  We also had fewer studies with ads, so the correlation might change with a larger sample.

While we will continue to monitor our studies’ completion rates, and are also looking at the factors that affect the rate of clicking into a survey, we can recommend the following strategies for ensuring higher completion rates:

  • Keep surveys under 14 minutes
  • Minimize the use of open-ended questions and, whenever possible, turn those questions into fixed-response lists or a scale
  • If using a more complex or repetitive exercise within a survey, such as a Kano or conjoint exercise, keep it to one exercise and hold any additional survey questions to a minimum
  • Offer cash/gift card incentives for non-panel surveys
  • Offer panel points if the survey is going to a panel

Remember to follow smart survey design practices, too, such as engaging and respecting your participants, using common language, making sure your questions are clear and unambiguous, avoiding bias in how a question is asked, and asking only relevant questions.

Looking for more participation from your customers or prospects in your research studies?  Contact Linda Kuster at lkuster@vernonresearch.com or use our website contact form.