Cultural and political perceptions of the different cities in Maricopa County are fairly entrenched, but they are not often tested against data. To address this, I provide concrete information about the political character of eight prominent Maricopa County cities: Chandler, Gilbert, Glendale, Mesa, Paradise Valley, Phoenix, Scottsdale, and Tempe. To produce 2016 election data at the city level, I use Arizona precinct data and Maricopa County city district maps to select the precincts that make up each city. (Note: The maps are labeled as being for the 2012 election, but this shouldn’t matter too much, as I doubt the city lines changed much if at all. At the very least, I did not find a precinct name from the 2012 maps that did not appear in the 2016 precinct data.)
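As a rough sketch of that precinct-to-city rollup, here’s what the aggregation could look like in pandas; the file and column names here are hypothetical stand-ins, not my actual sources:

```python
import pandas as pd

# Hypothetical inputs: one row per precinct, plus a precinct -> city lookup.
precincts = pd.read_csv("maricopa_2016_precincts.csv")
city_map = pd.read_csv("precinct_to_city.csv")

df = precincts.merge(city_map, on="precinct")

# Sum precinct-level counts up to the city level
cities = df.groupby("city")[
    ["clinton_votes", "trump_votes", "total_votes", "eligible_voters"]
].sum()

cities["clinton_pct"] = 100 * cities["clinton_votes"] / cities["total_votes"]
cities["trump_pct"] = 100 * cities["trump_votes"] / cities["total_votes"]
cities["trump_margin"] = cities["trump_pct"] - cities["clinton_pct"]
cities["turnout_pct"] = 100 * cities["total_votes"] / cities["eligible_voters"]
print(cities.round(1))
```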
Below is voting-related data for each city on percentage vote for Clinton and Trump, Trump’s percentage point margin, voter turnout percentage, and total votes cast:
Unsurprisingly, Clinton won the college town of Tempe by a 25-point margin and the large city of Phoenix by 11.2 points. However, Tempe also had the lowest voter turnout of all the cities examined here, with just 56 percent of its residents voting.
In a bit of a surprise, the city of Mesa–once called the most conservative in the country–ranks as only the third most Republican of the Maricopa cities included here. Clinton received some of her lowest support there, but Trump got only his third-largest vote percentage, making for a 16.8 percentage point victory. Perhaps Trump turned off otherwise reliable Republican voters–Mesa’s relatively less Republican showing could be 2016-specific–as the non-major-party vote was second greatest here (at 9.4 percent, behind only Tempe’s 9.6).
Gilbert, which went red by a 19.2-point margin, stands as the most pro-Trump city in this group. However, it’s the very wealthy town of Paradise Valley that voted for Trump at the highest rate, with 56.5 percent of its voters choosing the now president-elect. The same town saw the highest turnout rate among this group at 76.1 percent; education and income are very strong positive correlates of turnout, so this makes perfect sense. But given that high socioeconomic status among whites is one of the strongest negative correlates of Trump support, the Trump result strikes me as particularly surprising. Paradise Valley has far and away the highest median household income among these cities at $151,184 (next closest: Gilbert at $82,424), the highest percentage of people age 25+ with a college degree at 71.8 percent (next closest: Scottsdale at 54.2 percent), and the highest non-Hispanic white percentage at 89.9 percent. In other, similarly high-SES areas of the country, a much larger Democratic shift occurred. Perhaps party loyalty at the top of the ticket solidified in Paradise Valley instead.
This post originally appeared on Decision Desk HQ, but the link to the article there is broken so I’m re-posting it here.
One of the strongest forces in state-level elections has been the nationalization of the vote. Presidential and senate election outcomes have become increasingly correlated in recent decades. Straight-ticket voting has grown along with this, as voters have become more likely to choose candidates of the same party all the way down a general election ballot.
By one analysis, straight-ticket voting in terms of presidential and senatorial vote reached an apex in 2016: for the first time ever, no state split its vote between these two offices. New Hampshire proved symbolic of this year’s trend. In the race for president, Hillary Clinton garnered 46.8 percent of the vote to outlast Donald Trump’s 46.5. Two races down the ballot, Maggie Hassan (48 percent) barely edged out Kelly Ayotte (47.9) in one of the most expensive and competitive campaigns of the season. That makes for just a 0.2 percentage point difference between the margins of these two races—a telltale sign of the nationalization of the senate election.
But this trend toward nationalization does not necessarily extend to the other key election sometimes on the ballot—the gubernatorial one. Results in state governorship races are much less linked to presidential voting than those in senatorial races are, and 2016 bore this out very clearly. New Hampshire, in fact, had the smallest gap in presidential-gubernatorial voting in the country in terms of Democratic margin (2.6 points). But with such a close party divide across the entire ballot, that gap proved enough to flip the outcome: while a Democrat won the presidential vote in the state, a Republican in Chris Sununu won the governorship (49 to 46.7 over Democrat Colin Van Ostern). 7,404 fewer people cast votes in the gubernatorial election than in the presidential one, but Sununu was able to collect 8,250 more votes from New Hampshire residents than Trump did. That raises the following questions: how did this divergent result come about, where did Sununu run ahead of Trump, and where are the indications of split-ticket voting?
To speak to some of these topics, I collected vote totals from the New Hampshire Secretary of State website for 302 townships and wards (the wards making up larger townships). I cut that down to 241 townships, matching the total that recorded data in the 2012 presidential election. I’ll be examining vote shares and margins from the Republican point of view—for Trump, Ayotte, and Sununu.
Republican Vote across the Ballot
To start things off, below are two graphs comparing how closely the presidential vote matched up with the senatorial and gubernatorial vote. Each point corresponds to a township, and the size of a point corresponds to the number of presidential votes cast in that town. Points that fall above the 45-degree red line indicate where Trump ran behind Ayotte/Sununu, and points below the line show where he ran ahead. If the candidates had the same vote share in each township, all points would fall on this line—the further a town is from the line, the greater the discrepancy in vote share. Nine and seven town points are omitted from the Sununu and Ayotte comparisons, respectively, because they fall outside the scale range of the graphs. However, their exclusion makes little difference in the visualization.
Of the 241 townships in New Hampshire that recorded votes, Trump won 149 of them, Ayotte won 145 of them, and Sununu won 151 of them. As it relates to the above graphs, Sununu gained a greater share of the vote than Trump in 113 towns (47 percent of them), and Ayotte received higher support than Trump in 123 towns (51 percent). However, township size helps explain why Sununu won fewer towns but still got more of the vote. I’ll go deeper into this later, but as seen with the township sizes (and the five towns with the highest number of votes cast that are labelled), these graphs begin to show that Sununu did better in more populous parts of the state.
While the two graphs don’t differ much in where each township falls, one thing is clear: Trump support is more correlated with Ayotte support (0.85) than it is with Sununu support (0.80). Township points sit a bit further off the red line for Trump vs. Sununu. Similarly, Trump’s and Ayotte’s margins in each township are more correlated (0.90) than Trump’s and Sununu’s are (0.85). This all begins to show the greater dissimilarity in presidential-gubernatorial voting, enough of a difference to let Sununu win and Trump lose.
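These are plain Pearson correlations computed across townships. A minimal sketch of how they could be computed, assuming a township-level DataFrame `towns` with illustrative column names:

```python
# Pearson correlations across the 241 townships (column names are illustrative).
for gop in ["ayotte", "sununu"]:
    share_r = towns["trump_share"].corr(towns[f"{gop}_share"])
    margin_r = towns["trump_margin"].corr(towns[f"{gop}_margin"])
    print(f"Trump vs. {gop}: share r = {share_r:.2f}, margin r = {margin_r:.2f}")
```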
Conflicted Townships
With the senate and presidential races more correlated and both going blue in the Granite State, that leaves the bigger divide between the governor and presidential races as the greater point of interest. To home in on the biggest town discrepancies in the governor and presidential elections, below are two tables, the first showing the 14 towns where Sununu beat out Van Ostern and Trump lost to Clinton, and the second showing the 12 towns where Trump won but Sununu lost. The key columns here are the fifth ones over—how many percentage points Sununu and Trump ran ahead of one another. The “Presidential Votes Cast” column isn’t the same as the number of votes in the gubernatorial election, but still signifies the number of voters in a township.
Given that Dixville cast only seven votes, there’s not much to make of the Sununu-Trump difference there. The biggest difference in support for the two Republicans in Sununu’s favor outside of Dixville comes in the town of Newfields, where Sununu garnered 58 percent of the vote but Trump gained only 41.8 percent. Clinton got 53.2 percent of the vote there, so there was clearly some split-ticket voting going on.
Outside of Dixville, many of these townships in which Sununu won/Trump lost carry a common theme: they are largely located in the southeastern and southern portions of the Granite State. 12 of the 15 towns in the table are located in either Rockingham or Hillsborough County, both of which share a border with Massachusetts. This regional population exhibits much higher educational levels–one of the strongest predictors of voting against Trump among whites across the entire U.S.–and higher income levels relative to the rest of the state. It also contains a smaller population native to the state, with residents moving in from surrounding, more liberal states such as Massachusetts at higher rates. By contrast, as seen in the second table, the towns in which Trump ran ahead of Sununu were located primarily outside of the two aforementioned high-socioeconomic counties along the southern border—such as the townships of Bridgewater, Webster, and Whitefield.
Here’s another perspective on what these tables indicate: in towns where Trump won/Sununu lost, there were 605 total people who voted for Trump but not Sununu. In places where Sununu won/Trump lost, 3047 people refused to vote for Trump but still cast their ballots for Sununu. There were split-ticket areas that favored each of these Republicans more than the other, but the towns that favored Sununu were greater in population—as seen in the votes cast column—and had residents who likely split their tickets by greater margins against Trump (the Sununu % Pts. Ahead values in the first table were greater than the Trump % Pts. Ahead values in the second table). In the end, that helps explain why the presidential victor was blue and the gubernatorial one was red.
Population and Sununu Outperformance
The point regarding population is another key one to understanding the presidential-gubernatorial split in New Hampshire. In townships with more people casting votes—a good proxy for population size of an area—there were indications that Sununu ran ahead of Trump. Non-rural areas with greater populations represented the type of landscape in which Trump suffered the most across the entire country, and a similar trend seems to materialize in New Hampshire as well.
I computed several measures to get a sense of Sununu outperforming Trump, and one of them was called “Sununu Raw Net Votes Ahead.” It sounds a little convoluted, so here’s the formula I used to calculate this metric for each town:
Sununu Raw Net Votes Ahead = (Sununu total votes – Van Ostern total votes) – (Trump total votes – Clinton total votes)
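In code, the metric is a one-liner; here’s a minimal pandas sketch, assuming the same township-level DataFrame `towns` as above with vote-total columns (names illustrative):

```python
# Direct translation of the formula above (column names are illustrative).
towns["sununu_raw_net_votes_ahead"] = (
    towns["sununu_votes"] - towns["van_ostern_votes"]
) - (towns["trump_votes"] - towns["clinton_votes"])
# Positive values: Sununu ran ahead of Trump, relative to each race's opponent.
```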
The metric quantifies how much Sununu ran ahead of Trump in terms of raw votes and relative to each candidate’s respective opponent. The y-axis of the below graph represents this metric, while the total presidential votes cast—again, getting at how populous a town is—is on the x-axis. Points that fall above the dotted line are towns in which Sununu ran ahead, while points below are places where Trump ran ahead. As the graph notes, towns labelled in red are ones both Trump and Sununu won, blue represents those where both lost, and purple corresponds to towns Sununu won but Trump lost.
These two variables are fairly related, with a correlation coefficient of 0.62, and the figure bears the relationship out. Relative to their competitors, Sununu picks up many more raw votes than Trump does in several townships with large numbers of voters.
The use of raw votes here—rather than percent shares—is important to illustrate that a big part of Sununu winning and Trump losing in New Hampshire was what happened in more populous towns, and especially Clinton-leaning ones. Sununu lost a lot less ground than Trump did in some blue populous areas—Nashua, Manchester, Dover—in a losing effort, and even won some of these towns that Clinton won at the presidential level—Hampton, Amherst, and Stratham.
The two Granite State towns with the largest overall populations had a big role in this process: in Nashua, Sununu gained 2202 raw net votes on Trump, and in Manchester, he gained 1205 raw net votes. Notably, both townships still went Democratic in the presidential and gubernatorial races—many Clinton voters still voted Sununu—and both towns are located in Hillsborough County, where residents have relatively higher socioeconomic levels. Two more strongly Democratic towns, Portsmouth (+1594 raw net votes for Sununu) and Hanover (+1484), contributed a good amount to Sununu running ahead, as did one Republican-leaning township, Bedford (+1859). Once again, all three of these places exhibit higher SES levels than much of the rest of the state.
Ultimately, Sununu capitalized on these vote-rich and higher socioeconomic areas, flipped the town-level vote preference to Republican from Democratic in the presidential race in some cases, and likely split many individual ballots along the way. Trump ran behind Sununu in more populous towns, losing too many raw votes in the process. As a result, a party split in New Hampshire’s presidential and gubernatorial preferences emerged out of Election Day.
I expanded on my last post regarding state competitiveness/turnout over at Decision Desk HQ. Specifically, I considered targeted GOTV efforts and campaign resource allocation as another variable that could spur change in state turnout–rather than just state competitiveness.
Fun fact: I volunteered for DDHQ during New Hampshire’s three 2016 elections (state and national, primary and general) and reported precinct results for the township of Hanover. For the February primary, we were the first–at the very least, faster than the Associated Press–to report a surprising Hanover result that spoke to the emerging class divide (among whites) in the 2016 Democratic primary. I highly recommend following Decision Desk HQ on future election nights and for interesting commentary in the meantime.
There were a few interesting tidbits from my analysis that didn’t make it into any of my past blog posts. They revolve around some of the themes I touched on in the DDHQ blog: 1) how competition–or the perception of it–affects turnout, and 2) tensions between the Electoral College, a popular vote system, and voter participation.
1)
For closeness in a state, I used the margin between the actual vote shares of Clinton and Trump on Election Day. While people may sense the competitiveness that these margins reflect, they don’t know the results before deciding whether to turn out. What they can observe in advance are indications from polls about the state of a race. Below, I consider this possibility by using state polling margins from the final two weeks of the campaign, instead of actual vote margins, as the independent variable on the x-axis:
Comparing this figure with the first one from my last blog post makes it clear that polling competitiveness was less correlated with turnout than actual state results were. While the correlation using closeness in actual results was -0.52, the one using closeness in pre-election polling margins was weaker at -0.31. That could stem from the fact that state-level polling had serious error in many cases, so the actual results better reflect competitiveness even though they came after people made their turnout decisions.
2)
The question of whether feeling that your vote matters motivates you to turn out underlay a lot of this analysis. Part of this question also probes whether the Electoral College disincentivizes people from voting relative to a hypothetical popular vote system in which citizens would view their votes as more valuable. But what if the vote in states already properly reflects the voting preferences of all people in those states, and not just those of voters? In other words, if everyone considered their vote valuable and consequently turned out, there’s a chance state results wouldn’t change much anyway, provided actual votes already represent the preferences of the entire state well. To get at this new question, I collected data on percentage identification with the major parties in each state (2014 data) to proxy the political composition of all people in a state. The absolute difference in these percentages indicates whether a state is closely split along Democratic and Republican lines (a smaller absolute value) or is decidedly partisan in a particular direction (a larger value). This metric goes on the x-axis of the below plot, while the margin between actual major party vote shares in the 2016 election goes on the y-axis:
Do the state electoral compositions produced by the Electoral College reflect the actual political environment in that state–either split or one-sided between Democrats and Republicans? They generally do so well, but not perfectly–the correlation coefficient between the two plotted variables is 0.45.
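Both quantities are simple to reproduce; here’s a minimal sketch, with a hypothetical state-level file and illustrative column names:

```python
import pandas as pd

# Hypothetical file: 2014 party-ID percentages and 2016 major-party vote
# shares, one row per state (column names are illustrative).
states = pd.read_csv("state_partisanship.csv")

# One-sidedness of the adult population, regardless of direction
states["id_gap"] = (states["pct_dem_id"] - states["pct_rep_id"]).abs()
# One-sidedness of the actual 2016 major-party vote
states["vote_gap"] = (states["clinton_share"] - states["trump_share"]).abs()

print(states["id_gap"].corr(states["vote_gap"]))  # ~0.45 per the text
```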
The next question that naturally follows from this imperfect relationship is whether–relative to the partisan balance among all adults in a state–the voting results produced by the current system advantage one party more than the other. I attempt to express this for each state with the below chart. To measure the difference between the political balance of voters and that of all adults, I use the following formula:
Voter-adult difference = (Democratic % – Republican % among voters) – (Democratic % – Republican % among all adults)
If the political leanings of voters exactly matched those of all adults in a state, the point representing the state would have a value of zero and fall on the black vertical line in the graph. The further left of the line a state is, the more Republican its voters are relative to all of its adults; the further right, the more Democratic its voters are relative to all of its adults.
The pattern is very clear: actual voting results largely overrepresent the Republican lean of all adults in a state. In 42 states, voters/voting results are more Republican than are all adults/the leanings of all adults. Thus, in a scenario in which everyone in a state voted, Democrats would benefit much more than Republicans would. This also means that the current system in place–which produces the voting results used here–benefits Republicans most. Given that, at the national level, all adults are more Democratic than voters are, a result like this shouldn’t prove too surprising, but it does lay bare just how much Republicans benefit from how many, and which, voters turn out.
Recently, I was talking with someone about the relation between the Electoral College and voter turnout. Following an election in which the candidate who received plurality support did not become president, questions over whether the Electoral College best decides U.S. elections have surfaced in full force. The concern also coincides with another normative dilemma–the fact that only about three in five eligible Americans vote in elections. Perhaps, the thinking goes, the Electoral College disincentivizes participation in less competitive states. If one lives in a very blue- or red-leaning state, it makes little sense to think that a single vote will count much–compared to living in a more competitive swing state.
This sounded like a plausible idea, so I wanted to see what data from this past election and from prior ones had to say about this theory. In the below graph, I plot state competitiveness on the x-axis against state turnout on the y-axis. I define competitiveness as how close the margin between the two major party candidates–Hillary Clinton and Donald Trump–was, taking the absolute value of their difference in percentage support:
I use state turnout measures from the U.S. Election Project Website. Among the different turnout rates listed, I use the Voting Eligible Population (VEP) Highest Office rate, as this appears for every state (unlike the Total Ballots Counted rate).
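The competitiveness metric and its relationship with turnout are easy to compute; a minimal sketch, assuming a hypothetical state-level file with illustrative column names:

```python
import pandas as pd

# Hypothetical file: 2016 state vote shares plus the VEP Highest Office
# turnout rate from the U.S. Election Project (column names illustrative).
df = pd.read_csv("state_turnout_2016.csv")

# Competitiveness: absolute Clinton-Trump margin (smaller = more competitive)
df["competitiveness"] = (df["clinton_pct"] - df["trump_pct"]).abs()

print(df["competitiveness"].corr(df["vep_highest_office"]))  # ~ -0.52
```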
Although the two are not extremely related, there exists a moderately strong relationship between state competitiveness and turnout in the 2016 election. In states that were decided by closer margins and thus were more competitive, turnout typically proved higher. I overlay a smoothed linear model to capture the negative association between the difference in candidate margins and turnout. States like Michigan, Florida, and North Carolina, which were closely contested and had fairly high turnout, fall right on this line. But again, the relationship is far from perfect.
To gauge how common this pattern has been in American elections, I plotted the same variables over the last five elections. That’s what the below grids illustrate, along with correlation coefficients between these two measures for each election year:
It appears that the relationship between competitiveness and turnout only began to materialize in the last few elections. In the 2000 election, for example, there was essentially no relationship (correlation of -0.02) between how close a state was and voter turnout. The association grew a bit in 2004, which saw a -0.24 correlation between the two variables, and in 2008, with a -0.28. However, it’s been the 2012 election (-0.51 correlation) and the 2016 election (-0.52 correlation) in which competitiveness and turnout have become much more tightly linked. This is of course a bivariate analysis that leaves out plenty of other factors that could influence turnout, but there seems to be some credence to the aforementioned theory: close battleground states generally produce higher turnout among their eligible citizens, while staunchly Democratic or Republican states see their voters stay home on Election Day more often.
There’s a chance this analysis would make more sense with a lag for state competitiveness. In other words, it seems more theoretically sound that voters would consider the closeness of their state’s last election and decide on that basis whether to participate in the current one. I tested this possibility (e.g. the connection between 2016 turnout and 2012 competitiveness at the state level), but found the relationship was generally weaker than when comparing the variables within the same election year.
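A sketch of that lag test, assuming the state-year data sits in one long DataFrame `df_all` (illustrative names):

```python
# Reshape so each state is a row and each year's values are columns.
wide = df_all.pivot_table(index="state", columns="year",
                          values=["competitiveness", "turnout"])

print(wide[("competitiveness", 2016)].corr(wide[("turnout", 2016)]))  # same-year
print(wide[("competitiveness", 2012)].corr(wide[("turnout", 2016)]))  # lagged, weaker
```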
In the aftermath of the 2016 election, the focus has frequently turned to vote choice by Americans of different genders. Many emerged from Election Day–and after observing exit poll data–surprised that there wasn’t a larger disparity in vote choice between men and women. More startling was the fact that white women gave the majority of their vote (52-43) to Donald Trump, who did and said things that many people thought would disqualify him in the eyes of women.
Race is the demographic variable that most dictates political behavior, and not surprisingly it sheds light on some of the gender differences in voting patterns beyond just among whites. For example, Joshua Ulibarri wrote about how in the 2016 election, as well as in earlier elections and ballot initiatives, men have voted less Democratic than women among Latinos.
I wanted to check whether past voting data would bear out this pattern, and whether intra-racial gender disparities in vote choice varied across different groups. I was able to do this by using 2008 and 2012 data from the American National Election Studies, which, unlike in previous years, included oversamples of blacks and Latinos/Hispanics to make for more certain vote choice estimates. The below 3×2 graph shows the percentages of males and females in each of the three largest racial/ethnic groups who voted for the Democratic candidate, the Republican candidate, or another candidate in the previous two election years:
Beyond just intra-racial gender vote differences, it’s important to note that white women voting for Republicans is far from a new phenomenon that materialized in 2016. White females preferred John McCain to Barack Obama by about a nine-point margin in 2008, and opted for Mitt Romney by about a 10-point margin in 2012.
Among whites, women voted more Democratic and less Republican than men in both of the last two election cycles. The same pattern comes up when looking at gender vote choice within Hispanics only: Hispanic women vote more Democratic than Hispanic men do. All blacks vote Democratic at very high rates, but the gender difference flips a bit between the 2008 and 2012 elections. To get a better understanding of these differences, I plot the gender difference in Democratic support–female percentage Democratic vote minus male percentage Democratic vote–below:
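The gender gap plotted below is just the female-minus-male difference in Democratic vote share within each group; a minimal sketch with ANES-style but illustrative column names:

```python
# `anes` has one row per respondent; voted_dem is a 1/0 indicator.
dem_rate = (
    anes.groupby(["year", "race", "gender"])["voted_dem"]
        .mean()
        .unstack("gender")
)
dem_rate["gender_gap_pts"] = 100 * (dem_rate["female"] - dem_rate["male"])
print(dem_rate["gender_gap_pts"])
```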
In 2008, white females supported Obama 6.6 percentage points more than white males did. The disparity proved even larger among Hispanics: female Hispanics voted for Obama 10.7 percentage points more than male Hispanics did. Curiously, the difference is reversed among blacks–black males voted Obama 1.2 points more than black females did–but the separation is too small to make much of.
In 2012, the gender differences become a lot more similar across different races. Relative to men in their respective racial groups, white women voted 6.0 points more Democratic, black women voted 6.4 points more Democratic, and Hispanic women voted 4.9 points more Democratic. Thus, even when controlling for race, women vote more Democratic than men do among all voters. In 2012, ANES data shows that the gender differences by race settled at around the same amount among each racial group.
ANES data has not yet come out for 2016, but we still have the lesser quality (see bottom of this post) but still valuable exit polls from which to glean information–and check for intra-racial gender differences in vote choice again. Here’s the story 2016 exit polls tell for this topic:
According to exit poll data, the intra-racial gender gap seems to have grown in this past election. The gender difference among whites became 12 points, among blacks it was 12 points, and among Hispanics it was smaller at only six points. In light of this more recent data, perhaps the salience of gender-related issues–given Trump’s history and campaign comments and the first major female presidential candidate in Clinton–helped widen the chasm between male and female vote preference within the same racial group.
Unlike in the last post, where I just looked at polling error in terms of the margin of victory for Hillary Clinton at the national level, here I take a more comprehensive approach to assessing polling error at the state level. This entails looking beyond just error in candidate vote share margin to error in the vote shares themselves, for example. I’ll boil it down to a general takeaway at the end, but I take this comprehensive look to explore as many meaningful avenues for detecting polling error as I can, while leaving less room for doubt in the final results. Here are the steps I took for this analysis:
1) For each state, I recorded the two average Clinton and Trump vote shares seen in the polls: the average among online surveys and the average among live phone surveys, thus making for four potential data points for each of the 50 states. Polls came from those listed on the HuffPost Pollster website for each state (an option under “Customize this chart” allowed me to narrow down polls to only “Internet” or “Live Phone” ones). I only included polls that were within two weeks of the election. That meant start field dates of the poll had to be no earlier than October 25th, and of course the end field dates were no later than November 7th. All 50 states had several online surveys during the final two-week window, but only 25 states had at least one live phone poll during this time frame. I’ll explain how I deal with this issue when comparing survey modes later. In a few states, the live phone polling average gets represented by only one poll in the final two weeks. This is a caveat worth keeping in mind, as greater error could result from averaging over a smaller set of live phone polls–or just one–compared to potentially greater accuracy from averages of a larger set of online polls.
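Here’s a minimal sketch of that filtering-and-averaging step, assuming one row per poll per state with illustrative column names (not the actual Pollster export format):

```python
import pandas as pd

polls = pd.read_csv("state_polls_2016.csv",
                    parse_dates=["start_date", "end_date"])

# Keep polls fielded entirely within the final two weeks
window = polls[(polls["start_date"] >= "2016-10-25") &
               (polls["end_date"] <= "2016-11-07")]

# Average Clinton and Trump shares separately by state and survey mode
mode_avgs = (
    window.groupby(["state", "mode"])[["clinton_pct", "trump_pct"]]
          .mean()
          .unstack("mode")   # columns: Internet vs. Live Phone averages
)
print(mode_avgs.head())
```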
2) Once I had Clinton and Trump polling vote shares in two different survey modes, I merged this state-level data with the actual state level Clinton and Trump vote shares from the election, taken from the U.S. Election Atlas.
3) From here, I was able to calculate error among online and live phone interviews for each state. Polling error is generally defined as the deviation of polls from the actual results. There are five specific ways I gauge error.
a) Unadjusted margin: I first computed the Clinton margin of victory–which could be positive or negative–over Trump in each state to represent “actual margin.” I then calculated Clinton margins in polls conducted online–“online polling margin”–and in polls conducted through live phone interviews–“live phone polling margin.” (Note that the margin of victory measure can be from either Clinton’s or Trump’s perspective, as it produces the same error result as long as this stays consistent.) To get error values, I used the following formulas:
Online polling error = | (actual margin) – (online polling margin) |
Live phone polling error = | (actual margin) – (live phone polling margin) |
I include the term “unadjusted” because I don’t change anything about the vote shares from which margins are computed, such as making alterations in the denominator. That changes in the following error metric.
b) Two-party margin: For this metric, calculations for margins stayed the same as in Unadjusted margin except for one key change: Clinton and Trump vote in the actual election/online polls/live phone polls were calculated as two-party vote shares. In other words, third party and other selections were removed from the denominator. For example, here were the three adjustments I made in Clinton’s case:
Adjusted Clinton share = Clinton actual vote / (Clinton actual vote + Trump actual vote)
Adjusted Clinton online polling share = Clinton online share / (Clinton online share + Trump online share)
Adjusted Clinton live phone polling share = Clinton live phone share / (Clinton live phone share + Trump live phone share)
I did the same adjustments for Trump’s three vote shares. I then created three adjusted (two-party) margins:
Adjusted actual margin = adjusted Clinton actual share – adjusted Trump actual share
Adjusted online polling margin = adjusted Clinton online polling share – adjusted Trump online polling share
Adjusted live phone polling margin = adjusted Clinton live phone polling share – adjusted Trump live phone polling share
Adjusted online polling error = | (adjusted actual margin) – (adjusted online polling margin) |
Adjusted live phone polling error = | (adjusted actual margin) – (adjusted live phone polling margin) |
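Both the unadjusted and two-party margin errors can be wrapped in one small helper; a sketch, not my exact code:

```python
def margin_error(actual_c, actual_t, poll_c, poll_t, two_party=False):
    """Absolute error in the Clinton-Trump margin, per the formulas above.

    Shares are raw percentages; with two_party=True they are first rescaled
    so that Clinton + Trump sum to 100 (the two-party adjustment).
    """
    if two_party:
        actual_c, actual_t = (100 * actual_c / (actual_c + actual_t),
                              100 * actual_t / (actual_c + actual_t))
        poll_c, poll_t = (100 * poll_c / (poll_c + poll_t),
                          100 * poll_t / (poll_c + poll_t))
    return abs((actual_c - actual_t) - (poll_c - poll_t))

# e.g. margin_error(47.5, 49.0, 46.0, 45.1) -> 2.4
```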
c) Unadjusted Clinton share: In addition to evaluating error in terms of polling margin, I also did so with respect to vote shares for Clinton–and then for Trump–found in polls. Rather than examining the margin separating the two candidates in states, this metric and the next two look at whether polls properly gauged support levels for the major candidates–and whether online or live phone polls did a better job of this. The formulas for the unadjusted share metrics are very simple. Here is the metric for Clinton:
Online polling error = | (actual Clinton vote share) – (Clinton vote share in online polls) |
Live phone polling error = | (actual Clinton vote share) – (Clinton vote share in live phone polls) |
d) Unadjusted Trump share:
I took the same approach for the metric for Trump’s level of support in elections and in polls:
Online polling error = | (actual Trump vote share) – (Trump vote share in online polls) |
Live phone polling error = | (actual Trump vote share) – (Trump vote share in live phone polls) |
e) Two-party share: As was the case with Two-party margin, the adjustment for vote share entails looking only at the major party vote. For example, here’s how the calculation looked for Clinton’s vote share:
Adjusted Clinton share = Clinton actual vote / (Clinton actual vote + Trump actual vote)
Adjusted Clinton online polling share = Clinton online share / (Clinton online share + Trump online share)
Adjusted Clinton live phone polling share = Clinton live phone share / (Clinton live phone share + Trump live phone share)
The above three steps were the same as the first three for two-party margin. For two-party share, I went straight to the survey mode error computation after these first three steps:
Adjusted online polling error = | (adjusted actual Clinton share) – (adjusted Clinton online polling share) |
Adjusted live phone polling error = | (adjusted actual Clinton share) – (adjusted Clinton live phone polling share) |
Absolute error turns out to be the same whether it’s the two-party adjusted vote share of Clinton or Trump (the two adjusted shares sum to 100 percent, so an error of x points on Clinton’s share implies an error of x points on Trump’s), so I don’t have to repeat this process from Trump’s perspective. That’s why I’ll only call it Two-party share.
Note: I include two-party vote adjustments in addition to unadjusted metrics for polling error because the two-party approach has been used to evaluate polling error in the past. In particular, it’s important for errors in vote shares (not margins), as polls could undercount a candidate’s overall support but still capture the race well when you look only at the major party vote. However, I still view the unadjusted metrics as more meaningful, as underestimating support–even when the differences in support are correct–still qualifies as error.
Now that I’ve thoroughly explained the calculation of my five measures of polling error, I’ll move on to the results. The below chart sums up everything I’ve found regarding polling error by mode. The error values are broken up by 1) the sample of states included, 2) the polling error metric (of the five kinds described above), and 3) survey mode.
Data sources: HuffPost Pollster, U.S. Election Atlas.
In the upper half of the table above, I compare the mean absolute errors among only the 25 states that had both at least one online and one live phone poll in the final two weeks of the campaign. This is done for a fairer and more direct comparison (e.g. perhaps the 25 states without live phone polls during this window were harder to poll, and increased error for online polls while not doing the same for live phone polls). In the bottom half of the table however, I expand the mean absolute error for online polls to all 50 states, while keeping the live phone average at the 25 states that included this type of survey in the last two weeks.
Analysis:
There are 10 different points of comparison here for online and live phone polls. In seven of them, online polls have a smaller error and therefore were more accurate. At the fairest level of comparison–using only the 25 states where both modes can be tested–online polls perform better than live phone polls across all five metrics.
For example, in gauging the unadjusted margin between Clinton and Trump (comparing only across the 25 states), online polls erred by 3.87 points, but live phone polls were further off the mark with a 4.53-point error. In measuring support for Clinton (on the unadjusted measure), online polls had a 2.01-point mean error at the state level, while live phone polls had a 3.07-point mean error. In terms of unadjusted Trump support, online polls (3.26 error) were also more accurate than live phone polls (4.38 error).
The bottom half of the chart shows that expanding to all 50 states to evaluate online polls results in online polls having greater error in three of the five metrics than live phone polls. But because live phone surveys weren’t included for these 25 other states, it does not really make for a fair comparison to include these states in the online poll error averages.
Key takeaway:
At the state level, polling was generally more accurate when conducted through online interviews than when conducted through live phone interviews. The differences in error weren’t drastic by any means, but they were consistent across the board.
Caveat:
One important thing to point out about this finding is that I’m of course not making a causal claim about whether one type of survey administration reduced error more than another. To do that, I would have to control for house effects for different pollsters (i.e. accounting for the direction in which pollsters are typically biased), registered vs. likely voter populations, and other factors. Nevertheless, comparing the overall averages for online and live phone surveys and demonstrating error is still valuable in getting an overall picture of how close different survey modes got to the final result. That’s especially the case given that online polls are newer and much less tested methods, and that live phone polls have had the better track record of accuracy.
Just to better understand these results and visualize them in a way that includes state-specific survey mode comparisons (i.e. not just averages across all states), I’ll include a few graphs quantifying polling error by mode and state. I’ll use the three unadjusted error metrics from above. In all of these graphs, the closer the state abbreviations are to the x-axis of zero, the less error they had and thus the more accurate they were.
Unadjusted margin
On average, the error for this metric was greater among live phone polls than among online polls, but that wasn’t consistently the case for each state. For example, online polls were more accurate in Colorado, Georgia, Nevada, Utah, and Wisconsin, while barely outperforming live phone polls in North Carolina and Pennsylvania. Live phone polls outperformed online ones in New Hampshire and Ohio, and slightly more in Florida and Virginia. There was almost no difference in Arizona and Iowa.
Unadjusted Clinton share
For error in terms of estimating Clinton share of support, the states where online and live phone polls were more accurate were similar to those for the margin error metric. The only notable switches (in competitive states) were in Georgia, where live phone polls were more accurate, and in New Hampshire, where online polls were more accurate.
Unadjusted Trump share
Finally, there aren’t a lot of differences between state-level error in Trump support, seen in the above graph, and in Clinton support, seen in the previous graph. One thing worth noticing is the difference for the state of Wisconsin, the election result of which represented one of the bigger surprises. While error wasn’t that much different by survey mode for Clinton support, online polls were several points more accurate than live phone polls were for estimating Trump vote share in the state.
With the country emerging from the 2016 election looking more divided than ever, some fissures have received more attention than others. Recently, the growing “generation gap” in vote choice has been noted by David Hopkins as one of the larger and under-acknowledged divisions in the electorate. Preliminary data in the form of exit polls more or less confirms this notion, as Hillary Clinton gained 55 percent of the vote from voters ages 18-29 compared to Donald Trump getting 37 percent. It’s worth contextualizing this trend in its broader history.
Below, I show how young Americans voted in elections from 1964 to 2016. All data comes from the American National Election Studies except for 2016, which is based on exit poll data. Ideally, all data in the same graph would come from the same source, but pairing ANES data with 2016 exit polls allows me to go further back in time while still including the most recent election (i.e. there’s no ANES data for 2016 yet–but they’ve already started administering surveys).
Up until 1992, the youth vote was fairly torn between the two major parties, though it tilted toward Republican presidential candidates. In the seven pre-1992 elections shown here, Republicans won the youth vote five times. The one aberrant result was in 1964, when Democrats won the youth vote by a 73-27 margin–which perhaps could be ascribed to the nature of Lyndon Johnson’s landslide victory.
The election in 1992, however, heralded a change. From 1992 onward, Democrats have always claimed the majority of the youth vote, beating Republicans among this group by no less than 18 points in each election during this span. Support for Democratic candidates from 18-29 year olds rose steadily after 1980 and hit a high point in 2008 with Barack Obama’s victory. However, Democratic control of the youth vote has declined a bit thereafter, with Obama doing slightly more poorly with young voters during his re-election bid. Waiting for the 2016 ANES data will give a better sense of this, but if exit polls provide any good indication–and they generally do (in some areas)–it seems that the decline continued into the 2016 election. Hillary Clinton gained slightly less of the youth vote for the Democratic Party, and Donald Trump gained slightly more of it for the Republican Party.
Based on this early data, the generation gap–and namely the recent Democratic dominance of the youth vote–may be slowly shrinking. That might not be all that surprising, as in the last 50 years or so, no one party has ever exhibited consistent and overwhelming control of the youth vote.
At the same time, it’s crucial to note there’s a new factor influencing youth movement to Democratic candidates in the last few decades: racial diversification. Both in the general U.S. population and in the U.S. electorate, younger age groups are much more racially diverse than older ones in the present day. This is very important for understanding changes in the youth vote, as these racially diverse groups–Hispanics, blacks, and other racial/ethnic minorities–are much more attached to the Democratic Party than whites are and espouse more liberal ideologies than whites do. Thus, it might not be that young voters are getting more liberal and Democratic, but rather changing racial composition might be driving this change in political behavior.
The importance of controlling for race when examining youth vote is supported well by exit poll data seen in a tweet here by L.J Zigerell. When broken up into the three major race/ethnicity groups, it becomes apparent that there isn’t much difference in vote choice by age group. What drives a more Democratic youth vote likely has more to do with racial composition of different age groups. Below, I expand on this idea a bit by using 2012 ANES data to look at vote choice, partisanship, and political ideology of 18-29 year olds broken up by white and non-white race.
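A quick sketch of that kind of breakdown, using ANES-style but illustrative column names:

```python
import numpy as np
import pandas as pd

# 18-29 year olds, split into white vs. non-white
youth = anes[anes["age"].between(18, 29)].copy()
youth["group"] = np.where(youth["race"] == "white", "white", "non-white")

# Vote choice shares within each racial group (rows sum to 100)
vote_split = pd.crosstab(youth["group"], youth["vote_2012"],
                         normalize="index") * 100
print(vote_split.round(1))
```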
Source: ANES.
While 18-29 year olds might have voted for Obama more than any other age group did, disparities appear when this group’s behavior gets split up by race. While Republicans won roughly a majority of white youths, Democrats won non-white youths overwhelmingly, winning more than three times as many as Republicans did.
Source: ANES.
A similar story emerges for the partisanship distribution: while more white youths identify as Republican, more non-white youths identify as Democrats, and by an even larger margin.
Source: ANES.
Finally, looking at political ideology reinforces the same notion found in the other two political measures: race divides the ideological disposition of youths. While more white 18-29 year olds see themselves as ideologically conservative than liberal, many more non-white youths identify as liberal than as conservative.
All in all, this data on political behavior–along with the key plot from this tweet concerning the 2016 election–points to the same overarching conclusion: race, not age, is more important for understanding vote choice and political views of the youth. Once you control for race/ethnicity, the generation gap becomes a lot less wide.
Moreover, the possibility I raised earlier that the generation gap could be shrinking may itself be related to race. There is considerable evidence that the exit polls misrepresented the vote choice of Hispanics, inflating their support for Trump and underestimating their support for Clinton. Given that Hispanics make up a disproportionately large share of the 18-29 age group, this exit poll underestimation of the Democratic Hispanic vote could make the 18-29 age group’s vote appear less Democratic than it really is. Once the higher quality ANES survey comes out, there’s a chance the slight decline in youth Democratic preference seen in exit polls doesn’t show up.
The proliferation of online polling has marked a key point in the recent transformation of public opinion and survey research. The traditional method of reaching the population through random digit dialing and live landline telephone interviews has all but fallen by the wayside, prompting researchers to turn to cellphone surveys and to explore innovative methods through the internet. There’s a common refrain that online polling is the future, but it is also the area with the most room for improvement in increasing reliability and coverage of the U.S. population. Significant differences still emerge between survey respondents reached through web as opposed to telephone methods, and errors often occur for minority subgroups.
Perhaps the clearest way to test the accuracy of different survey modes–self-administered online surveys versus interviewer-administered telephone surveys–is through elections. Polls of all kinds get conducted in the lead-up to elections, giving a sense of public mood before actual results are tallied. Election results can then clearly test the accuracy of these estimates of public opinion, telling us whether one survey mode captures voter preferences better than the other. Such is the opportunity that the 2016 U.S. presidential election presented.
In the below graphs, I first plot national poll estimates of Hillary Clinton’s lead over Donald Trump–among polls whose starting field date was within the final week of the election (before November 8th)–relative to Clinton’s actual popular vote margin. I then do the same but expand the poll field date window to two weeks before the election. Pollster names in red correspond to online survey methods, while those in black employ live phone surveys. The blue dashed vertical line represents Clinton’s actual margin, and the text in green spells out the average Clinton margin for polls in the graph broken up by survey mode.
The above plot shows that live phone surveys on average came closer to Clinton’s actual margin of victory in the popular vote than online surveys did. Relative to Clinton’s 2.08 margin, live phone surveys in the final week averaged out to a 3.1-point lead for her, while online surveys showed a 4.29-point lead. That makes for a 1.02 percentage point polling error for live phone surveys, and a larger 2.21-point polling error for online surveys (all errors in absolute value). McClatchy/Marist (11/1-11/3) and FOX (11/1-11/3), both of which use live phone interviewing, came closest to the actual margin in showing two-point leads for Clinton.
In polls conducted during the final week of the campaign, it’s thus fair to say live phone surveys were more accurate than online ones at the national level.
Note: The 2.08 margin for Clinton is as of 12/10/2016, and could be updated to reflect new votes being counted.
In the above plot, I expand the scope of examined polls to include those conducted within the final two weeks of the campaign–their start field dates had to have been no earlier than October 25th. A similar story regarding survey mode accuracy appears in this case: live phone surveys came closer to Clinton’s margin of victory than online ones did. While online polls averaged to show a 3.86 lead for Clinton (1.78 percentage point polling error), the average of live phone polls during this span resulted in a 2.87 Clinton lead (0.79 point polling error) which proved more accurate.
Thus, when considering polls conducted both during the final week and the final two weeks of the campaign, live phone surveys have proven more accurate than online surveys in gauging national level vote preference. It’s just one election, but at least in regards to assessing public opinion of the electorate, the more traditional method in this case has emerged as the more reliable one. In another post, I’ll do a similar test of survey mode accuracy for state-level polling, which was more error-prone than national polls.
Media outlets have recently seized upon a growing post-election trend: the rising favorability of president-elect Donald Trump. And it’s not a matter of cherry-picking a poll or two to make this point. In polls measuring the favorability rating of the incoming president since the November election, Trump has been increasingly warmly received by the American public. Here’s what the favorability rating trend line for Trump looks like:
Source: HuffPost Pollster.
Given his standing as the least liked presidential candidate in recorded history, going by these same ratings during the campaign, this trend is notable–although not altogether surprising. It’s worth bearing in mind that candidates coming off an election victory often get positive favorability bounces. For example, after the 2008 election, newly elected Barack Obama became even more well-liked by the public relative to the pre-election run-up:
Source: HuffPost Pollster.
That post-election favorability bump for Obama can be discerned right before the “Feb. 1 2009” part of the graph, where his favorable numbers (black line) rise and unfavorable ones (red line) drop. Some of that positive swing toward Obama began before the election, but most of it occurred after: in a little under two months’ time, Obama went from a +31.5 net favorability rating to a +46.9 rating. Though it’s only one example, the case of Obama’s post-election image goes to show that the American public very often warms up to a newly elected president.
That pattern appears to be a strong one, too, as it is now occurring even for the most disliked presidential candidate in history in Trump. What’s received less focus, however, are the drivers behind this tide of greater favorability for the newly elected president–which demographic and political subgroups, if any, have most spurred this change? Below are a few tables containing data for some of these key subgroups that try to answer this question.
The data comes from two Politico/Morning Consult polls conducted before (11/4-11/5) and after (12/1-12/2) the election to get a clear sense of the changes in favorability toward Trump. The columns titled “Before” and “After” contain the net favorability ratings recorded in the pre- and post-election polls, respectively. Net favorability was calculated according to the following equation:
Net favorability = percent favorable – percent unfavorable
The “Net Change” column represents the favorability swing for Trump, subtracting “Before” from “After.” To make this change value for each subgroup easier to understand, I created a “Relative change” column that accounts for the +24 point overall favorability swing among all voters (i.e., I subtract +24 from “Net change”). “Relative change” thus represents a more meaningful favorability swing that is relative to the +24 baseline.
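In code, the table columns reduce to a few arithmetic steps; a sketch with illustrative variable names:

```python
# One row per subgroup, with pre- and post-election favorable/unfavorable %s.
subgroups["net_before"] = subgroups["fav_before"] - subgroups["unfav_before"]
subgroups["net_after"] = subgroups["fav_after"] - subgroups["unfav_after"]
subgroups["net_change"] = subgroups["net_after"] - subgroups["net_before"]
subgroups["relative_change"] = subgroups["net_change"] - 24  # all-voter baseline
```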
Source: Politico/Morning Consult (11/4-11/5) and Politico/Morning Consult (12/1-12/2).
The above and below charts document these relative net favorability swings (“Relative change”) for Trump among different subgroups. In terms of gender, men (+4) have become more favorable to Trump after the election than women have (-3). In terms of age groups, older people–such as in the 45-54 and 65+ brackets–have warmed up to Trump more than younger people have. The -9 net swing relative to the baseline among those ages 55-64 is a surprising outlier given Trump usually receives a more positive reception among older age groups. Perhaps he didn’t have that much more room to grow in this age bracket, or this could represent a more anomalous result.
Parsing through changes among partisan groups is a little difficult because the data overstates the share of Independents, as Morning Consult does not group Democratic-leaning and Republican-leaning Independents with their respective parties. However, Democrats clearly lag behind the overall shift, not warming up to Trump as much as all voters have. As for political ideology groups, conservatives moved a relative seven points more favorable toward Trump after the election, the most positive movement of any of the ideological subgroups.
Source: Politico/Morning Consult (11/4-11/5) and Politico/Morning Consult (12/1-12/2).
Examining favorability swings by education returns an interesting result: people with post-graduate education became a relative nine points more favorable toward Trump when comparing these pre- and post-election polls. The same positive movement toward Trump in a high-SES subgroup gets mirrored for income subgroups: the most positive movement toward Trump occurs among those with incomes over $100,000, who became a relative 15 points more favorable toward Trump. But because lower SES is so correlated with non-white racial identification, it’s hard to glean something really meaningful from these education/income net swings without having the same breakdown for race by education (e.g., for non-college whites, $100k+ whites, etc.).
Finally, examining relative net favorability swings by race comes up with a weird result. The meaningful part here is that Hispanics have warmed up to Trump a lot less relative to overall changes (a relative net change of -12 points). Below-average movement toward Trump among all three major racial groups (as represented by negative values) seems unlikely, however (the positive swing has to come from somewhere). So does the +29 value among those of an “Other race.” The asterisk there also indicates a small subgroup sample size for “Other race,” which could be producing this excessively large swing. In other words, the net relative swings described for racial groups might be less meaningful than those for other subgroups, though the one for Hispanics remains notable.
There is often talk of two prominent generalities in politics that don’t always comport with one another:
The more people that vote in elections, the better chance Democrats have of winning.
Republicans have an untapped potential in mobilizing the “missing white voter”–a part of the general population that’s not always part of the electorate in full force.
On the surface, these two very general ideas both have some merit. The first point makes sense for several reasons. While party affiliation data among voters shows a more even distribution between Democrats and Republicans, the same data for all Americans reveals a more Democratic bent to the wider U.S. public. In other words, the U.S. public as a whole is more Democratic than the portion of the public that participates in elections. Additionally, groups that turn out to vote at lower rates–Hispanics, Asians, and other non-white/non-black races–lean more toward the Democratic Party. If more people voted in elections, the presence of these minority groups in the electorate would rise and thus create a greater Democratic tilt.
The second point regarding the missing white voter is much more speculative, but does have some credence. After all, whites make up a majority of the U.S. population, and an even larger slice of Americans who vote: 74 percent of voters in the 2012 election were white, though that share has dropped in each general election since 1988. Whites identify as and vote Republican at much higher rates than the rest of the population. Many whites still don’t vote in elections, however, and that is especially true of non-college-educated whites, who have an even more Republican character than college-educated whites do. Thus, the GOP could benefit from tapping into this large portion of the population that doesn’t vote at especially high rates but that could presumably lean toward voting Republican. This could have materialized to some extent in the 2016 election in helping elect Donald Trump.
It’s worth mentioning another thing regarding the first point: the low-turnout minority groups aren’t that large, and by themselves couldn’t make up the difference seen between the public and electorate partisanship distributions. The first idea must involve a story about white voters outside the electorate trending more Democratic in order to be true, placing it at odds with the second idea.
That raises the following question: if more white voters entered the electorate, would that cause a shift to the political left or right? Available evidence seems to point to a likely shift leftward for the electorate if this were to occur.
In connection with very interesting research based on merging vote preference data from the American National Election Studies with turnout data from voter files, Spahn pointed out the following: in the 2012 election, whites without a college degree who did not turn out to vote preferred Barack Obama over Mitt Romney by a 59-41 margin. Whites without a college degree who did participate in the election preferred Romney by a 57-43 margin. The white working class–defined as whites without a college degree–that was missing from the electorate thus preferred Democrats by an 18-point margin. If white non-college citizens entered the electorate at a higher rate, Spahn’s data suggests it would advantage Democrats, not Republicans. Considering non-college whites are one of the GOP’s most reliable demographic bases, this finding sheds very interesting and surprising light on the portion of the American public that is not voting in elections.
I wanted to expand on that finding a bit with available data. While I don’t have access to voter file data/vote validation data, I can look at the 2012 ANES to see how white working class voters and non-voters compare on dimensions other than vote choice. That’s what I show below using four variables measuring partisanship and political ideology. Again, I define the white working class as white respondents in the ANES survey whose educational level stands at anything below a college degree.
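As a sketch of how such a comparison can be tabulated (the column names are illustrative, not actual ANES variable codes):

```python
# Working class whites: white respondents without a college degree
wwc = anes[(anes["race"] == "white") & (anes["college_degree"] == 0)]

# Seven-point party ID distribution, voters vs. non-voters (percentages)
pid_pct = (
    wwc.groupby("voted_2012")["party_id_7pt"]
       .value_counts(normalize=True)
       .mul(100)
       .unstack("voted_2012")
)
print(pid_pct.round(1))
```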
The first dimension represented above is partisanship on a seven-point scale, ranging from “Strong Democrat” to “Strong Republican” classifications. One aspect that immediately stands out is how much more non-voters identify as Independent without any lean toward either of the major parties; they’re more Independent by 17.5 percentage points. Relative to the white working class who voted in the 2012 election, non-voters are also less likely to identify as strong partisans, but the disparity is greater for Republican identification. Non-voters identify as Strong Republicans 16.3 percentage points less than voters do, but identify as Strong Democrats only 6.5 percentage points less.
The second dimension above builds on the first: this partisanship indicator groups Independents who lean toward a party, strong partisans, and weak partisans in that party, shrinking seven groups down to three. Once again, the white working class citizens who did not turn out in the 2012 election are much more Independent than those who did. Similarly, while non-voters are less attached to both parties compared to voters, they’re much less relatively attached to Republicans than they are to Democrats.
The third dimension turns to another aspect of political behavior–a liberal to conservative continuum of ideology. This variable allows people to identify with seven different ideological groups, from extremely liberal to extremely conservative. Non-voters are 16.6 percentage points more likely to see themselves as ideologically moderate than voters are. This accords with the two prior dimensions in producing an image of non-voters as possessing weaker political ties–an unsurprising finding given that they opted not to vote in the 2012 election. However, there exists a more liberal tilt to the ideological makeup of working class white non-voters than to that of voters. For example, non-voters are 10.3 percentage points less likely to see themselves as “Conservative” than voters are. Every point on this spectrum–outside of the “Moderate” classification–shows a dropoff in ideological identification going from voters to non-voters except for one: the “Slightly Liberal” self-described ideology. While 8.8 percent of voters see themselves as “Slightly Liberal,” 13.3 percent of non-voters do.
Finally, this fourth dimension above more adequately captures what the third one was trying to say: among the white working class, non-voters are much less ideologically conservative than voters are. Grouping the three ideological groups for liberal and conservative from the prior seven-point scale, the above graph shows that voters and non-voters are not all that different in terms of liberal ideology. However, non-voters are 15.5 percentage points less conservative than voters are among working class whites. Notably, non-voters prove more left-leaning on ideological dimensions than they do on the partisanship dimensions described beforehand.
All in all, this limited analysis using ANES data lends more credence to what Spahn’s tweeted data suggested: the portion of the white working class that doesn’t vote is more Democratic and liberal than the working class whites who do vote. In other words, adding more working class whites to the electoral fold would not necessarily benefit Republicans.