The proliferation of online polling has marked a key point in the recent transformation of public opinion and survey research. The traditional method of reaching the population through random digit dialing and live landline telephone interviews has all but fallen by the wayside, prompting researchers to turn to cellphone surveys and to explore innovative methods on the internet. There’s a common refrain that online polling is the future, but it is also the area with the most room for improvement, both in reliability and in coverage of the U.S. population. Significant differences still emerge between survey respondents reached through web and telephone methods, and errors often occur for minority subgroups.
Perhaps the clearest way to test the accuracy of different survey modes–self-administered online surveys versus interviewer-administered telephone surveys–is through elections. Polls of all kinds are conducted in the lead-up to elections, giving a sense of the public mood before actual results are tallied. Election results can then serve as a clear test of these estimates of public opinion, telling us whether one survey mode captures voter preferences better than the other. Such is the opportunity the 2016 U.S. presidential election presented.
In the graphs below, I first plot national poll estimates of Hillary Clinton’s lead over Donald Trump–among polls whose starting field date fell within the final week before the election (before November 8th)–relative to Clinton’s actual popular vote margin. I then do the same with the poll field date window expanded to the final two weeks of the campaign. Pollster names in red correspond to online survey methods, while those in black employ live phone surveys. The blue dashed vertical line marks Clinton’s actual margin, and the text in green gives the average Clinton margin for polls in the graph, broken down by survey mode.
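The poll-selection step described above amounts to a simple date filter on each poll’s start field date. A minimal sketch of that step, where the poll records (pollsters, dates, and margins) are hypothetical stand-ins for illustration rather than the actual dataset:

```python
from datetime import date

# Hypothetical poll records: (pollster, survey mode, start field date,
# Clinton-minus-Trump margin in percentage points). Illustrative only.
polls = [
    ("McClatchy/Marist", "live phone", date(2016, 11, 1), 2.0),
    ("FOX", "live phone", date(2016, 11, 1), 2.0),
    ("Example Online Poll", "online", date(2016, 10, 28), 4.0),
]

def polls_in_window(polls, earliest_start):
    """Keep polls whose field period started on or after earliest_start."""
    return [p for p in polls if p[2] >= earliest_start]

# The two windows used in the plots: final week, and final two weeks.
final_week = polls_in_window(polls, date(2016, 11, 1))
final_two_weeks = polls_in_window(polls, date(2016, 10, 25))
```

Filtering on the start of the field period (rather than the end) matches the selection rule stated above: a poll counts as "final week" if its field dates began within that window.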
The above plot shows that live phone surveys on average came closer to Clinton’s actual popular vote margin than online surveys did. Relative to Clinton’s 2.08-point margin, live phone surveys in the final week showed an average lead of 3.1 points for her, while online surveys showed an average lead of 4.29 points. That amounts to a 1.02 percentage point polling error for live phone surveys, and a larger 2.21-point error for online surveys (all errors in absolute value). McClatchy/Marist (11/1-11/3) and FOX (11/1-11/3), both of which use live phone interviewing, came closest to the actual margin, each showing a two-point lead for Clinton.
In polls conducted during the final week of the campaign, it’s thus fair to say live phone surveys were more accurate than online ones at the national level.
Note: The 2.08-point margin for Clinton is as of 12/10/2016 and may change as additional votes are counted.
In the above plot, I expand the scope of examined polls to include those conducted within the final two weeks of the campaign–that is, polls whose start field dates were no earlier than October 25th. A similar story regarding survey mode accuracy appears here: live phone surveys came closer to Clinton’s margin of victory than online ones did. While online polls showed an average Clinton lead of 3.86 points (a 1.78 percentage point polling error), live phone polls during this span averaged a 2.87-point Clinton lead (a 0.79-point error), which proved more accurate.
Thus, whether considering polls conducted during the final week or the final two weeks of the campaign, live phone surveys proved more accurate than online surveys in gauging national-level vote preference. It’s just one election, but at least with regard to assessing the opinion of the electorate, the more traditional method emerged here as the more reliable one. In another post, I’ll run a similar test of survey mode accuracy for state-level polling, which was more error-prone than national polling.