As the rate of mobile customer engagement increases, staying on top of effective mobile research techniques becomes more important. One way we do that is by asking our research panel participants what works well and what does not. For instance, we recently conducted an internal study with our panel to get feedback about certain question types and how well they translate to mobile and tablet devices. For each question type, we asked whether the question was easy to answer and whether it displayed well, and we offered a text box for any comments. We learned many things that will help us develop even better surveys in the future, and we’d like to share four of the most valuable takeaways from this research with you.
Verify mobile users are able to magnify question types. Sometimes people need a question magnified. Beyond the obvious suggestions to use large, easily readable fonts, test your platform’s ability to resize (zoom in and zoom out) on various mobile and tablet devices. Some question types resize cleanly, while others do not, and the results may surprise you.
Be cognizant of pop-up keyboards. Just because a question looks great during a mobile survey preview doesn’t necessarily mean the question will work great. Few current mobile and tablet devices have physical keyboards; most rely on pop-up, touchscreen keyboards whenever manual input is necessary. This can spell trouble for short, open-ended responses or manual-input questions like street addresses, which not only require the pop-up keyboard but also force respondents to switch between alphabetic and numeric keyboards mid-answer.
Ask yourself if you really need to ask that rank-order question. Rank-order questions are rarely fun or easy on desktops, and they are often even less so on smartphones and tablets. Suppose a survey item asks respondents to rank seven restaurants, 1 through 7. If respondents need to type a rank next to each restaurant, that means multiple pop-up keyboard interactions on the same question—and still more if respondents want to change their answers partway through or make basic human mistakes. Drag-and-drop rankings might work better, assuming you’ve tested the item on a variety of platforms to ensure its functionality. Another solution is to have respondents select their favorite of the seven, then their least favorite of the remaining six, then their favorite of the remaining five, then their least favorite of the remaining four, and so on; the final item’s rank is implied once only one remains. This, though, turns one question into six, and researchers should avoid extending surveys—especially surveys they know people intend to take on mobile devices—whenever possible. Again, the best solution might be to re-evaluate whether a full 1-to-7 ranking is needed; perhaps there’s a better way to ask the question. Maybe you only need respondents’ top three choices.
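To make the alternating best/worst approach concrete, here is a minimal sketch of how those selections reconstruct a full ranking. The function name, the simulated respondent, and the placeholder restaurant names are all hypothetical, not part of any survey platform; the point is simply that ranking n items this way takes only n − 1 questions, since the last item’s rank is implied.

```python
def elimination_ranking(items, pick):
    """Build a full 1-to-n ranking from alternating best/worst picks.

    `pick(remaining, best)` simulates one survey question: it returns the
    respondent's favorite (best=True) or least favorite (best=False)
    of the items still remaining.
    """
    remaining = list(items)
    top, bottom = [], []  # ranks filled in from each end of the list
    best = True
    while len(remaining) > 1:
        choice = pick(remaining, best)
        remaining.remove(choice)
        (top if best else bottom).append(choice)
        best = not best  # alternate favorite / least favorite
    # the single remaining item slots into the only open rank
    return top + remaining + bottom[::-1]

# Simulated respondent whose true preference order is alphabetical:
restaurants = ["Alpha", "Bravo", "Charlie", "Delta", "Echo", "Foxtrot", "Golf"]
prefers = lambda remaining, best: min(remaining) if best else max(remaining)
print(elimination_ranking(restaurants, prefers))
# → ['Alpha', 'Bravo', 'Charlie', 'Delta', 'Echo', 'Foxtrot', 'Golf']
```

Note that the simulated respondent answers only six questions to produce the full seven-item ranking, which is exactly why the approach lengthens a survey.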
Beware the sliders! Although our own tests on a relatively small set of smartphones and tablets suggested that slider-based items worked well for online surveys, we received numerous complaints from the many smartphone users in our study. Because swipe gestures perform a variety of functions on mobile apps and devices, several users reported that attempting to move a slider out of its neutral starting position accidentally exited the survey, repositioned the browser window, or opened a new window, among other problems. When designing a survey, it’s important to ask yourself whether a slider-based item adds any benefit over a traditional radio-button Likert scale.
And there you have it: four simple things to keep in mind when optimizing surveys for cross-platform usage. We’ve already found them to be immensely helpful to our panel and our clients, and we look forward to continuing to improve our mobile research experience.
For more information about your market research options, contact Vernon Research Group at (319) 364-7278 or subscribe to our blog.