Does the order in which a survey respondent sees a web question's response options affect the answer to the question? This question has been probed often in survey research, with tests typically finding a primacy effect. When respondents can see an entire response set at once (i.e. not on phone surveys, where recency effects come into play), they are biased toward selecting response options that appear earlier (see here for a review of past work on this). This satisficing behavior is problematic because it violates the assumption that respondents consider the entire response set before answering a question; instead, they settle on the first option that seems reasonably acceptable. Notably, this could produce an inaccurate reflection of actual opinion if later, overlooked response options better capture the respondent's views.
In a recent survey I conducted, I embedded a response order experiment to see whether such response order effects had been plaguing the student surveys I've been running. Specifically, on various questions about social/academic life perceptions and experiences during students' freshman years, I assigned a random half of survey respondents to see a certain response order for a question, and the other half to see the reversed response order for that same question (note: "Not sure" options always appeared at the end of a set). I did this for 15 questions (one of which was actually a five-question grid), and checked whether response percentages differed in a statistically significant way between the groups that saw different response orderings. In short, I did not find any significant response order effects. While there were differences in the expected direction (when responses appeared earlier in the set, they were chosen more often), none of these differences attained significance at the 0.05 level.
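As a rough sketch of the comparison behind each question, a two-proportion z-test on the share choosing a given option in the two order conditions looks like the following. This is a minimal illustration with made-up counts, not the survey's actual data, and it ignores the weighting scheme (a weighted analysis needs a design-effect correction; see below):

```python
from math import sqrt

def two_prop_z(x1, n1, x2, n2):
    """z statistic for the difference between two independent proportions,
    using the pooled standard error under H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 90 of 180 chose the option when it appeared early,
# 70 of 180 chose it when it appeared late.
z = two_prop_z(90, 180, 70, 180)
significant = abs(z) > 1.96  # two-sided test at the 0.05 level
```

Because the groups here are weighted, the naive standard error above would understate the true variance; a proper test would use the effective sample size rather than the raw (or weighted) N.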
A few response order differences came close to significance, though, which I want to briefly touch on. When students were asked how much they missed home during their freshman year, more chose "A little" when it appeared second in the response set (49 percent) than when it appeared third (37 percent). The small sample size in each experimental group (a weighted N of roughly 180) widens the 95% confidence intervals, so the two groups do not differ significantly here. Interestingly, the differences for this question's other response options are not nearly as large.
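To see how small, weighted groups widen these intervals, here is an illustrative calculation for the 49 versus 37 percent gap, using the rough weighted N of 180 per group from above. The design effect value is hypothetical (the post does not report one), but any deff greater than 1 shrinks the effective sample size and widens the interval:

```python
from math import sqrt

def diff_ci(p1, n1, p2, n2, deff=1.0, z=1.96):
    """95% CI for p1 - p2; deff (design effect) inflates the variance
    to account for survey weighting (deff = 1 means unweighted SRS)."""
    se = sqrt(deff * (p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2))
    return (p1 - p2) - z * se, (p1 - p2) + z * se

naive = diff_ci(0.49, 180, 0.37, 180)               # treats weighted N as actual N
adjusted = diff_ci(0.49, 180, 0.37, 180, deff=1.6)  # hypothetical design effect
# With deff = 1.6 the interval widens enough to cover zero.
```

Treating the weighted N as if it came from a simple random sample puts the interval's lower bound just barely above zero here, which is consistent with the post's description of this difference as close to, but not attaining, significance once the weighting is properly accounted for.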
When students were asked whether they have regrets about coming to Dartmouth, a noticeable (though not significant) divide appears depending on whether they see the response option "Yes" or "No" first. When "Yes" appears first, 55 percent of students say they have regrets while 38 percent say they don't. On the other hand, when "No" precedes "Yes" in the response set, 41 percent say they have regrets while 51 percent say they don't. Again, conclusions here are limited by sample size constraints, but it's still notable that response tendencies swing in this manner, especially since this involves inverting only a two-option response set (rather than the four-option sets that are reversed in other cases).
I would need a larger sample to confirm, but it does seem that for some questions (in this case, ones more sensitive than others in the survey), response ordering matters. For the aggregate results that ultimately get reported, randomly reversing the response order across respondents would alleviate some of these concerns. At the same time, it's worth keeping in mind that this only occurred for two of 15 questions (really two of 19, counting the grid items separately), so it's not an especially serious problem, particularly given that none of the response order effects reached significance anyway.