Survey mode effects–the idea that survey results differ depending on whether the survey is administered by a live interviewer over the phone or self-administered online–have been an oft-discussed topic during the 2016 election season. The issue first came up during the GOP primary and gained attention again during the general election. Perhaps most importantly, it has frequently been used to examine the potential for social desirability bias in pre-election estimates of candidate support. Comparing live phone polls–where this bias would materialize as respondents, talking with another human, choose not to reveal socially undesirable opinions–with online polls–an anonymous setting that should not produce the same bias–could reveal whether any such bias is present. Whether social desirability bias contributed to polling error, and specifically to underestimating Trump support, is far more complicated than a simple mode comparison (see section 4 in my article here for a quick review of competing arguments). Nevertheless, mode effects like these are worth examining as a clear-cut way of getting at social desirability bias–and at whether there was a “shy Trump” effect.
Patrick Egan made a good first pass at this at the Monkey Cage, tracking Hillary Clinton’s percentage-point margin over Donald Trump from July until the election. He finds no major difference between live phone polls and online ones, and rightly concludes that no evidence exists of social desirability bias working against Trump. One thing I wanted to check, however, is how these mode effects developed not just in terms of candidate support margin (i.e., Clinton % minus Trump %) over the course of the campaign, but also in terms of each candidate’s individual level of support. Were mode differences greater in estimating one candidate’s share of support than the other’s? That’s what the graph below shows.
The plot largely follows Egan’s approach (in time frame and in using the three-way trial heat polls from HuffPost Pollster), except that it replaces margin with individual support shares. For each candidate, a solid green line represents internet and automated phone (IVR) polls and a dashed purple line represents live phone polls; each line is a trend estimate over time produced by a Loess smoothing function in R.
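For readers curious what the Loess smoothing behind those trend lines actually does, here is a minimal sketch in Python (rather than R) on made-up numbers: for each point, fit a tricube-weighted linear regression over its nearest neighbors and evaluate that local line at the point. R’s loess() adds refinements omitted here (robustness iterations, local quadratic fits), so treat this as an illustration of the idea, not the exact procedure used for the graph.

```python
def loess(xs, ys, frac=0.5):
    """Tricube-weighted local linear regression (a basic Loess).

    For each x0, fit a weighted least-squares line using the
    nearest frac * n points and evaluate it at x0.
    """
    n = len(xs)
    k = max(2, int(frac * n))
    fitted = []
    for x0 in xs:
        # Bandwidth: distance to the k-th nearest neighbor of x0.
        h = sorted(abs(x - x0) for x in xs)[k - 1] or 1.0
        # Tricube weights: w = (1 - (d/h)^3)^3, zero beyond h.
        pts = [(x, y, (1 - min(abs(x - x0) / h, 1.0) ** 3) ** 3)
               for x, y in zip(xs, ys)]
        # Closed-form weighted least squares for y = a + b*x.
        sw = sum(w for _, _, w in pts)
        swx = sum(w * x for x, _, w in pts)
        swy = sum(w * y for _, y, w in pts)
        swxx = sum(w * x * x for x, _, w in pts)
        swxy = sum(w * x * y for x, y, w in pts)
        denom = sw * swxx - swx * swx
        if abs(denom) < 1e-12:
            fitted.append(swy / sw)  # degenerate: fall back to weighted mean
        else:
            b = (sw * swxy - swx * swy) / denom
            a = (swy - b * swx) / sw
            fitted.append(a + b * x0)
    return fitted

# Toy example: noisy daily support percentages (invented numbers,
# not actual 2016 polling data).
days = list(range(12))
support = [42, 44, 41, 45, 43, 46, 44, 47, 45, 48, 46, 49]
trend = loess(days, support)  # smoothed trend-line values
```

The `frac` parameter plays the role of Loess’s span: larger values pool more neighbors into each local fit and produce a smoother, flatter trend line.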
As the graph shows, there is almost no difference in how the survey modes estimate Trump support over the course of the campaign. If social desirability bias were really at work, we would expect the green line–the one for online and IVR polls conducted without a live interviewer–to sit consistently above the purple line. Instead, the two track very closely together. The only times they split are before August and in early- to mid-September, when Trump in fact sees more support in live phone interviews. Crucially, the gap between mode estimates almost entirely disappears from early October until the election. Thus no bias toward underestimating Trump support–at least by this simple survey mode comparison–can be detected.
Interestingly, larger mode divides appear in estimates of Clinton support. At nearly every point in the campaign, Clinton receives more support in live phone polling than in online/IVR polling. Natalie Jackson at HuffPost Pollster first noted this pattern in late October, attributing it to live phone polls being more likely than online/IVR polls to nudge initially undecided respondents into choosing a candidate–often Clinton. The gap gradually shrinks over the course of the general election campaign, and just as with Trump support, the mode effect vanishes right before the election. Thus no real mode effect–at least in the critical stage right before Election Day–can be found in this case either.
It’s generally difficult to go back and properly test for social desirability bias after the fact (the real test is stated vote intention before an election). But at least in this straightforward survey mode comparison, no trace of the bias can be found.