Much of the shock attached to the 2016 election result surely had to do with the once inconceivable, conventional-wisdom-breaking phenomenon of Donald Trump becoming the 45th president of the United States. But tied into this sense of an unprecedented Trump presidency was what public opinion polling–a staple of the modern media’s election coverage–was saying in the run-up to last Tuesday.
Presidential vote intention polls oscillated plenty from the first months of 2016 through the week before the election. But even then, the signal concerning the nation’s vote preference was clear: Hillary Clinton maintained around a three to four percentage point lead over Donald Trump in national popular support. Given national vote share’s typically strong relationship with Electoral College fortunes, electoral success seemed justifiably likely for Clinton. That’s why, right before the election, the major forecasts–either largely or entirely derived from polls–pegged Clinton’s chances of winning anywhere between 71 and 99 percent.
As Tuesday night progressed and Trump emerged victorious in the early morning hours of Wednesday, Americans looked back at what had happened, and much attention shifted to the polls. Clearly, it seemed, the polls measuring people’s vote preferences were off in some way, perhaps by more than they ever have been, and that error explained the unexpected result. That’s certainly part of the story, but it’s a bit more nuanced than that.
Most importantly, at the national level, talk of a large polling error is exaggerated. Though Clinton’s popular vote lead stands at only about 0.5 percentage points right now, that edge will continue to expand as more votes are counted. It could well reach around one to two percentage points, which would make for about a two percentage point polling miss at the national level (compared to her three-to-four point lead in the polls). Not only would this level of deviation fall within the margin of error of most pre-election polls, but the polling error could also prove smaller than the one in the 2012 election. The big difference is direction: polls were biased against Obama in 2012, but biased against Trump in 2016.
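As a quick back-of-the-envelope check on that arithmetic, here is the national-level calculation in Python, using the rough figures discussed above rather than final certified totals:

```python
# Rough national-level polling error: actual margin minus polled margin.
# These inputs are the approximate figures discussed above, not certified totals.
polled_margin = 3.5   # Clinton's roughly three-to-four point national polling lead
actual_margin = 1.5   # a plausible final popular vote margin of one to two points

national_error = actual_margin - polled_margin
print(f"National polling error: {national_error:+.1f} points")  # about -2.0, i.e., biased against Trump
```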
Instead, polling error materialized differently across different states. Importantly, it wasn’t systematic in one direction; rather, it was biased against Trump in some states and against Clinton in others. The election swung, however, on the direction of the bias where the state races were closest. In comparing the polls to the actual election results, the error was largely biased against Trump in swing states whose pre-election polls, for the most part, showed a Clinton victory on the horizon.
First, a few notes on how I’m going about this analysis:
- For the main part of this analysis, my estimate of “polling error” comes from comparing the Democratic (Clinton’s) percentage-point margin of victory in pre-election polls to the actual Democratic margin of victory in the election. In some parts, I can also show specifically where Clinton’s support was underestimated and where Trump’s was, as in the below heat map of polling errors:
- You’ll notice that for this map, the polling error pertains to one of the two main aggregators of election polls, HuffPost Pollster, the other one being RealClearPolitics. While similar for the most part, Pollster includes some online polls that RCP does not, RCP includes some landline-only polls that Pollster does not, and Pollster also uses a trend line estimate that’s close to but not the same as the averaging of polls RCP employs. In later parts of this analysis, I will take the average of these estimates of state-level races to give a general sense of “what the polls are saying.”
- I see four different ways to measure state polling errors: 1) Democratic margin (actual Democratic margin minus Democratic margin in polls), 2) absolute margin (the absolute value of the above, thus ignoring the direction in which the polling was biased), 3) Democratic vote share (actual Clinton vote share minus Clinton vote share in polls), and 4) Republican vote share (actual Trump vote share minus Trump vote share in polls). A short code sketch after these notes illustrates these four calculations.
- In parts of the analysis where there is an average of the Pollster and RCP reads of polls, Alabama, Hawaii, North Dakota, and Wyoming are excluded because RCP did not have poll averages for these states, and thus I could not average this aggregate polling picture with that of Pollster.
- Final note: the actual election results used in these polling error calculations are based on vote counts as of about Thursday, November 10th. While a bit dated a few days later, this should produce only very minor discrepancies in the polling error calculations.
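To make those four error measures concrete, here is a minimal sketch in Python (pandas). The poll and result figures are illustrative placeholders, not the exact numbers used in the analysis:

```python
import pandas as pd

# Illustrative placeholder values for Clinton/Trump shares in polls and results.
df = pd.DataFrame({
    "state":      ["WI", "PA", "CA"],
    "dem_poll":   [46.8, 46.2, 55.0],   # Clinton share in pre-election polls
    "rep_poll":   [40.3, 44.0, 33.0],   # Trump share in pre-election polls
    "dem_actual": [46.5, 47.5, 61.7],   # Clinton share of the actual vote
    "rep_actual": [47.2, 48.2, 31.6],   # Trump share of the actual vote
})

# 1) Democratic margin error: actual Democratic margin minus polled Democratic margin
df["margin_error"] = (df["dem_actual"] - df["rep_actual"]) - (df["dem_poll"] - df["rep_poll"])
# 2) Absolute margin error: the same quantity, ignoring the direction of the bias
df["abs_margin_error"] = df["margin_error"].abs()
# 3) Democratic vote share error: actual Clinton share minus Clinton share in polls
df["dem_share_error"] = df["dem_actual"] - df["dem_poll"]
# 4) Republican vote share error: actual Trump share minus Trump share in polls
df["rep_share_error"] = df["rep_actual"] - df["rep_poll"]

print(df[["state", "margin_error", "abs_margin_error", "dem_share_error", "rep_share_error"]])
```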
Here’s what gives the best idea of what happened with state-level polls in the election:

Here I’m using the Democratic margin of victory as a way to measure polling error. States that fall on the 45-degree line in the above chart are those whose pre-election polling estimate matched their actual voting results last Tuesday. If states fall above the line, the polling error was biased against Clinton and underestimated her margin. If they fall below the line, then the error was biased against Trump. States in red indicate where the margin of victory for either candidate was 10 percentage points or less–loosely defined swing states (since there are more here than usually qualify as “swing”). The x-axis variable, Democratic Vote Margin in Polls, is taken from the average of Pollster and RCP state poll trend lines/averages. As explained before, this is a composite indicator of what state polls were showing.
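For readers who want to reconstruct a chart like this one, here is a minimal matplotlib sketch with made-up margins; the actual chart uses the full set of state polling averages and results:

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up illustrative data: Democratic margin in polls vs. actual Democratic margin.
poll_margin   = np.array([6.5, 2.2, 22.0, -8.0, 3.8])
actual_margin = np.array([-0.7, -0.8, 30.1, -19.0, -1.2])
is_close = np.abs(actual_margin) <= 10   # loosely defined "swing" states

fig, ax = plt.subplots()
ax.scatter(poll_margin[~is_close], actual_margin[~is_close], color="gray", label="Other states")
ax.scatter(poll_margin[is_close], actual_margin[is_close], color="red", label="Decided by 10 points or less")

# 45-degree line: states on it are those whose polls matched the result exactly.
lims = [-40, 40]
ax.plot(lims, lims, linestyle="--", color="black", linewidth=1)
ax.set_xlabel("Democratic Vote Margin in Polls")
ax.set_ylabel("Actual Democratic Vote Margin")
ax.legend()
plt.show()
```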
In several very blue states, such as California, New York, and Washington, the polling error was in fact biased against Clinton. In the more solidly red states, the polls missed much more on Trump’s margin of victory. The problem for Democrats–and where the election was won–came in the middle: of the 17 states decided by 10 percentage points or less, the error was biased against Trump in 15. In 10 of these states, Trump won. In five especially crucial ones–North Carolina, Florida, Pennsylvania, Michigan, and Wisconsin–a projected Clinton win based on polls did not match the actual result of a Trump victory. Over the previous two presidential elections, Democrats had won these five states nine times out of 10 (the only loss being North Carolina in 2012). In 2016, all five swung to Trump, which proved decisive for his electoral victory and cemented where the polling error mattered most.
There are a few other ways to look at the polling error and the general form it took. Here I come back to the four different approaches to evaluating error identified above. Differentiating by poll aggregator–which together form my “position of pre-election polls” proxy in the prior graph–could also prove informative (I include a column for the average of the two as well). The mean and standard deviation of polling errors add another layer of understanding: how far off the polls were from the final result and how widely their deviations from that result varied, respectively. That’s what I show in the below table:

Polls were off by an average of -4.64 points in terms of Democratic margin by state, and thus were biased against Trump by 4.64 points on average. That error was larger for HuffPost Pollster’s trend line estimates. This makes sense given what these estimates attempt to do: smooth out the incoming stream of polls and not react as quickly to them as simple averaging would. With late-deciding voters breaking toward Trump according to exit polls, this type of estimate would be less suited to pick up that kind of movement. Generally speaking, given how stable pre-election polling tends to be and how noisy individual polls are, smoothing would qualify as the more prudent approach. But it wasn’t as helpful in this election. Reliance on polls meant depending on numbers that, in hindsight, seem likely to have systematically underestimated Trump support for some time–and thus you couldn’t completely chalk all of this up to a story of differential non-response.
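To illustrate the general point about smoothing reacting more slowly to late movement, here is a toy comparison in Python. This is not Pollster’s actual trend-line model; a simple exponential smoother stands in for a heavily smoothed estimate, and the made-up series shifts toward Trump in the final polls:

```python
# Toy example: a heavily smoothed trend estimate vs. a simple average of recent polls
# when the last few polls shift. Numbers are made up for illustration.
polls = [4.0, 4.5, 3.8, 4.2, 3.9, 2.5, 1.8, 1.2]  # hypothetical Clinton margins, oldest first

# Simple average of the three most recent polls (quick to react to late movement)
simple_avg = sum(polls[-3:]) / 3

# Exponential smoothing as a stand-in for a trend-line estimate
alpha = 0.2  # smaller alpha = heavier smoothing = slower reaction to new polls
smoothed = polls[0]
for p in polls[1:]:
    smoothed = alpha * p + (1 - alpha) * smoothed

print(f"Average of last 3 polls:  {simple_avg:.2f}")  # around 1.8, reflecting the late shift
print(f"Smoothed trend estimate:  {smoothed:.2f}")    # around 2.9, still closer to the earlier level
```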
But back to the above table. The absolute error, regardless of the direction of the bias, came out at 5.85 points, and HuffPost Pollster estimates once again had greater errors than RCP averages. Regardless of this comparison, a state-level average of 5.85 points is a lot. The lack of a parallel comparison to 2012 and earlier elections precludes any definitive judgment, but that type of error remains fairly large. One caveat to keep in mind, however: some states had little polling done before the election, and thus their estimates are not as reliable as those in the heavily polled battleground states, for example.
For polling error in terms of estimates of Clinton/Democratic and Trump/Republican shares of the vote, greater errors occurred for estimating Trump support. The average polling error was 1.83 points for Democratic vote based on polls, and a much larger 6.47 points off the mark for Republican vote.
Standard deviations provide a picture of how widely the polling errors varied. If that value is small, then polls were off by similar amounts across all states. If the value is large, then the size of the polling miss varied much more from state to state. For Democratic margin, absolute margin, and Republican share of the vote, the HuffPost Pollster errors varied less than those from RCP. At the average level, it’s once again difficult to claim much without historical context, but a 5.75-point standard deviation for the Democratic margin error is still considerable.
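For a sense of how a summary table like the one above can be produced, here is a short pandas sketch. The data frame layout and the handful of rows are hypothetical placeholders, not the actual state-level data:

```python
import pandas as pd

# Hypothetical long-format data: one row per state per aggregator, with the
# error measures defined earlier. Values are placeholders for illustration.
errors = pd.DataFrame({
    "aggregator":      ["Pollster", "Pollster", "RCP", "RCP"],
    "margin_error":    [-7.1, -1.0, -6.2, -0.4],
    "dem_share_error": [-0.3, 6.7, 0.5, 7.0],
    "rep_share_error": [6.8, 7.7, 6.7, 7.4],
})
errors["abs_margin_error"] = errors["margin_error"].abs()

# Mean = how far off the polls were on average; std = how much that miss varied by state.
summary = errors.groupby("aggregator").agg(["mean", "std"])
print(summary.round(2))
```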
Finally, to conclude with the nature of the errors themselves, here’s the distribution of the absolute polling margin errors (ignoring the direction of the bias):

It’s definitely not the case that there were large polling errors across all states, as many state-level errors are in the single digits in absolute value. However, once you move further right on this histogram and away from 1-3 point errors, a margin-of-error explanation can no longer account for misses of these magnitudes. Instead, causes more serious (and more problematic) than the sampling error intrinsic to public opinion polling may have driven them.
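As a rough benchmark for that claim, here is the standard back-of-the-envelope 95 percent sampling margin of error for a single candidate’s vote share in a typical state poll; errors well beyond this range, especially for averages of several polls, are hard to pin on sampling error alone:

```python
import math

def margin_of_error(n, p=0.5):
    """Approximate 95% sampling margin of error, in percentage points, for a sample of size n."""
    return 100 * 1.96 * math.sqrt(p * (1 - p) / n)

for n in (600, 1000):
    print(f"n = {n}: +/- {margin_of_error(n):.1f} points on a single candidate's vote share")
# n = 600 gives roughly +/- 4 points, n = 1000 roughly +/- 3. Misses well beyond
# that, particularly for averages of several polls, suggest something other than sampling error.
```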