Clarifying the Relationship between Partisanship and 2016 Vote Choice with Panel Data

When looking at polling crosstabs that break down key variables like vote choice and approval ratings by party identification, many observers conclude that partisanship has a strong impact. These data show that Democrats largely express support for Democratic candidates and leaders, while Republicans express support for theirs. Yet conclusions about this relationship could suffer from endogeneity (I’ve discussed this before here). Inferences about how partisanship affects an outcome Y must assume that partisanship is stable, and that the outcome Y–such as vote choice–does not in turn affect partisanship. Reverse causation would mean that people first arrive at a decision to vote for Donald Trump, for example, and subsequently update their partisanship to match their candidate preference. If this occurs, then the previously assumed “exogenous” nature of partisanship as an unmoved predictor becomes dubious. Perhaps individuals who were originally Republicans but did not support Trump changed their partisanship; if so, partisanship became a mere reflection of vote choice rather than a stable underlying predisposition in any meaningful sense.

A similar concern has been raised regarding approval rating polls showing strong intra-party support for Trump. Original party base members may no longer identify as Republicans on surveys, and thus Republican party identification simply comes to mean support for Trump and not a meaningful underlying political trait. Cross-sectional survey data cannot overcome this problem, as it lacks a measure of an individual’s preexisting partisanship. Panel data, on the other hand, can better address this issue. Cross-sectional data uses contemporaneous 2016 measures of partisanship and vote choice (recorded at the same time) to say that 90 percent of Republicans voted for Trump. But the better approach would be to use a pre-2016 measure of partisanship–unaffected by Trump–and calculate how vote choice breaks down along this variable that better represents an underlying indicator of partisanship. Publicly available panel data–with waves in December 2011, November 2012, and December 2016–from the Voter Study Group (VSG) offers such an improvement in capturing party voting (the rate at which partisans vote for co-party candidates).

Specifically, I can compare what party voting looks like using both a 2011 measure of partisanship (before both the 2012 and 2016 elections) and a 2016 measure of partisanship (that is purportedly endogenous to 2016 vote choice). If there are large differences in party voting across these measures, then an endogeneity problem exists, suggesting that partisanship is shaped in response to vote preference. If small or no differences result, then partisanship constitutes a more exogenous variable–in line with the stable-over-time character that much of the political science literature (and other evidence) suggests.
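To make this comparison concrete, here is a minimal sketch of the two party-voting calculations using a few hypothetical panel records. The field names (`pid11`, `pid16`, `vote16`) and the records themselves are my own shorthand for illustration, not the VSG’s actual variables or data.

```python
# Party voting rate: share of partisans (under a given wave's party ID)
# who voted for their party's 2016 candidate.
def party_voting_rate(records, pid_key):
    partisans = [r for r in records if r[pid_key] in ("D", "R")]
    loyal = [r for r in partisans
             if (r[pid_key] == "D" and r["vote16"] == "Clinton")
             or (r[pid_key] == "R" and r["vote16"] == "Trump")]
    return len(loyal) / len(partisans)

# Four made-up respondents; the third switched parties to match their vote.
panel = [
    {"pid11": "D", "pid16": "D", "vote16": "Clinton"},
    {"pid11": "R", "pid16": "R", "vote16": "Trump"},
    {"pid11": "D", "pid16": "R", "vote16": "Trump"},
    {"pid11": "R", "pid16": "R", "vote16": "Clinton"},
]
print(party_voting_rate(panel, "pid16"))  # 0.75 with the contemporaneous measure
print(party_voting_rate(panel, "pid11"))  # 0.5 with the pre-Trump measure
```

A gap between the two rates is exactly the endogeneity signature described above; the question is how large that gap is in the real panel data.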

The VSG data offers mixed evidence but mostly favors the latter conclusion. I’ll start with the broadest perspective–overall rates of party voting: Republicans voting for Trump and Democrats voting for Clinton as a percentage of all partisans who reported voting in 2016. If I use the 2016 measure of partisanship, I find that 89 percent of partisans voted their party in 2016. If I use the 2011 measure, 84 percent did. Thus, it appears that a small percentage of people shifted their partisanship to match their vote preference in 2016, in a way that would slightly inflate the apparent impact of partisanship on voting. But the effect is fairly small, as the party voting rates remain similar.

Breaking these party voting rates by party and candidate reveals a similar picture, but with some additional information. The 2016 measures of partisanship suggest very high rates of party voting, with 90 percent of Democrats voting Clinton and 88 percent of Republicans voting Trump. That rate declines a bit when using a pre-Trump (2011) measurement of partisanship: 83 percent of original Democrats opted for Clinton, while 84 percent of original Republicans went Trump. These percentages are not that different, but at least some partisanship updating is likely at play. What’s more interesting is how this pre-Trump underlying partisanship better captures defection from the Democratic Party (in a way that–not balanced out by similar defection among Republicans to Clinton–could have tilted the election just enough to Trump). If we use a 2016 party measure, then we would conclude that seven percent of Democrats voted Trump. Using the 2011 measure of the original Democratic party base, however, nearly doubles that size, revealing 13 percent of Democrats who voted for Trump in 2016. This panel approach can thus offer a more meaningful estimate of how many original Democrats defected from the party in voting for the out-party candidate in 2016.

Finally, I wanted to further break down this comparison by using the full seven-point party identification scale. The below plot shows how each party identification group (of the seven in total) voted in 2016 when using an individual’s 2016 reported partisanship.


Differences in party voting rates at this partisanship subgroup level appear when comparing the above plot to the same breakdown but with an individual’s 2011 reported partisanship, as the below plot illustrates:


If we use 2016 cross-sectional data, then we end up with an overestimate of how closely “Strong Democrats” and “Not very strong Democrats” adhered to their party affiliation in deciding whom to vote for. This approach would say that 97 percent of strong Democrats and 79 percent of weak Democrats voted their party in 2016, as opposed to respective rates of 90 and 70 percent when using underlying (2011) partisanship. Similar differences appear on the Republican side, but as mentioned earlier, the magnitude is smaller. The biggest difference is for the “Lean Republican” category: while a 2016 measure suggests 90 percent of this group went Trump, a metric capturing original members of this category suggests 83 percent did.

In sum, this comparison does suggest some partisanship updating to accord with vote choice took place, but not to any large extent. Concerns about endogeneity should be tempered. That’s in large part because partisanship remains a very stable variable over time–at both the aggregate and individual level. To underscore this latter point, I used all three survey waves of the VSG (bringing in the 2012 wave that I’ve excluded up until now) to track individual-level partisanship dynamics at three different points in time over a five-year span. That results in the following table, which shows the distribution of VSG survey respondents by the different possible party ID combinations they can have across the three survey waves. In each wave, respondents can express one of three partisan affiliations (Democrat, Independent, or Republican), which makes for 27 unique combinations (3*3*3 = 27).


Two combination groups stand out: people who identified as Democrats in all three waves (41.74 percent of all respondents) and people who identified as Republicans in all three waves (33.79 percent). That means about three out of every four people (75.53 percent to be exact) are consistent partisans over the course of five years. The next most common group is people who do not reveal any partisan leanings–Independents–which makes up 6.07 percent of all respondents. Thus, 81.60 percent of all people express partisanship (or the lack thereof) consistently across five years at three different points in time, a piece of evidence indicative of strongly stable partisanship.

Moreover, looking further down the table, only 4.64 percent of respondents ever identify with both parties at some point during the three survey waves. Rather, most of the party switching–which is very little to begin with–is into and out of the Independent category (this movement takes up 13.76 percent of all survey respondents). In light of these trends, the lack of substantial endogeneity–changes in partisanship driven by vote choice selection–should not come as much of a surprise.
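The combination arithmetic above is easy to verify with a quick enumeration, assuming each wave simply codes respondents as Democrat, Independent, or Republican:

```python
from itertools import product

# All possible party ID paths across the 2011, 2012, and 2016 waves.
combos = list(product("DIR", repeat=3))
print(len(combos))  # 27 = 3 * 3 * 3

# The "same answer in every wave" paths that account for most respondents.
stable = [c for c in combos if len(set(c)) == 1]
print(stable)  # [('D', 'D', 'D'), ('I', 'I', 'I'), ('R', 'R', 'R')]
```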


Sample Composition Effects in Alabama Senate Election Polling

The controversy surrounding Roy Moore in the Alabama Senate election seemed primed to create the differential partisan nonresponse bias often seen in past election polling. As negative news about a candidate in an election increases, members of the public who share that candidate’s party or support the candidate become less inclined to take polls about the election. Shifts in election polling could thus be mirages resulting from these nonresponse patterns, not actual changes in opinion.

I’ve shown a similar trend using crosstabs from public polling in the context of Donald Trump approval rating polls, finding a moderately strong relationship between the partisan composition of a sample and Trump approval. I thought it might be informative to do the same with the string of pre-election polling for the upcoming Alabama Senate election. First, among polls that make partisan composition data available, I graph the relationship between partisan composition (the difference in Republican and Democratic percentage of a poll’s sample) and Moore’s margin of support (the difference in Moore’s and Jones’s intended vote shares).


There’s a weak but present association between the two variables here. If the relationship were one-to-one, partisan composition would completely shape the polling outcome, suggesting that changes in Moore’s polling numbers are driven entirely by how many Democrats and Republicans take a poll. That’s not the case here. For every one-point increase in net Republican margin, there’s a 0.36-point increase in Moore’s margin. It’s positive as expected–the more Republicans that take a poll, the better Moore fares–but not too strong. This may have more to do with other ways in which polls conducted for this race differ, in which case comparing consecutive survey results from the same pollster would be a more accurate test. However, not enough polls exist to do that.
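For readers who want the mechanics, the 0.36 figure is the slope from a simple bivariate regression. A self-contained sketch, using made-up poll values rather than the actual polls:

```python
# Ordinary least squares slope for a single predictor.
def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

x = [10, 15, 20, 25]  # hypothetical net Republican composition (R% - D%)
y = [2, 5, 4, 8]      # hypothetical Moore margin (Moore% - Jones%)
print(ols_slope(x, y))  # 0.34: each extra point of Republican composition
                        # adds about a third of a point to Moore's margin
```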

Comparing the level of Trump support in a poll with Roy Moore’s advantage over Doug Jones produces a slightly stronger relationship, as shown below:

In this case, every one-point increase in Trump’s net approval corresponds to a 0.89-point increase in Moore’s margin over Jones. Thus, the more people with favorable views toward Trump respond to Alabama Senate election polls, the better Moore appears positioned in the race. It’s worth noting that the variation in Trump net approval across polls–as small as +5 and as large as +22–likely also reflects how different pollsters approach sampling and vary in methods. If I try to hold pollster “constant,” I only have two JMC Analytics polls to consider. As I’ve mentioned before, the lack of Trump approval change but presence of a Moore margin shift suggested the vote choice shift was not artifactual but real opinion change.

Ideally, there would be more opportunities to look at within-pollster change like this. One comparison doesn’t preclude the possibility of nonresponse bias affecting polling in this race, but it does indicate this bias isn’t clear-cut or strong–even though the race’s dynamics and surrounding news would make it likely. At the same time, looking across polls does reveal patterns indicative of some stereotypical nonresponse bias effects. Polling will likely remain limited down the stretch of this race, but more polls to examine will always give a clearer picture of whether this bias is at play–especially from pollsters who have already polled the race earlier.


Presidential-Gubernatorial Race Splits and Party Voting in 2016

While down-ballot races such as Senate and House elections have become increasingly nationalized–closely correlating with state presidential vote–gubernatorial elections have not followed this path as much. As Harry Enten detailed using 2012 presidential vote and 2014 gubernatorial vote totals, several states went for presidential and gubernatorial candidates of different parties. Examples include Florida, Maryland, Massachusetts, and Wisconsin. In some cases, states chose the same party in both races but diverged significantly in the vote shares that got them there (e.g. Kansas).

A similar thing occurred in 2016. Among the 12 states that held a governor’s race, Democratic vote share in gubernatorial elections could explain just 29 percent of variation in Democratic vote share in the presidential race. The relationship between the two variables can be seen in the below plot. If all states fell on the 45-degree line, then their gubernatorial and presidential votes would match perfectly. Thus, the further each point (state) diverges from the line, the more unrelated these vote shares are.
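The 29 percent figure is the R-squared from regressing one vote share on the other–equivalently, the squared correlation. A sketch of the calculation, with invented vote shares rather than the real 12 states:

```python
# R-squared for a bivariate relationship: squared correlation of x and y.
def r_squared(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov ** 2 / (vx * vy)

gub = [45, 55, 50, 40]   # hypothetical Democratic gubernatorial shares
pres = [50, 52, 44, 46]  # hypothetical Democratic presidential shares
print(r_squared(gub, pres))  # 0.18: most variation left unexplained
```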


Five of the 12 states elected governors from the opposite party of the president that won the state: Vermont, New Hampshire, North Carolina, Montana, and West Virginia. The curious split between gubernatorial and presidential voting at the state level therefore appears to have persisted in 2016. That prompts obvious questions about voting at the individual level: to what degree is cross-party voting occurring? Such a question has implications for the broader study of partisanship, as it appears that party affiliation exerts a different force in presidential and gubernatorial ballot decisions. As a result, it might give a clue about the degree to which voters rely on partisanship or other factors–such as those specific to candidate traits or state conditions–in casting their votes.

Using 2016 CCES data, I was interested in seeing the rates of party voting by election type–presidential or gubernatorial–in the 12 states that had both race types in 2016. I calculate “party voting” as the percentage of Democrats who vote for a Democratic candidate and Republicans who vote for a Republican candidate, out of all partisans who expressed a vote choice when asked after the election took place. (Note: the results below do not use vote validation out of sample size concerns, but when using only verified voters, the results are very similar.) The darker blue tinted bars correspond to party voting in the presidential race, while the lighter tinted bars represent voting in each state’s gubernatorial race. 95 percent confidence intervals are included for each calculation. While overlapping intervals do not necessarily rule out statistically significant differences, they give a sense of the precision of each percentage, which is informative for understanding party voting rates.
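The intervals attached to each percentage come from the standard normal approximation for a proportion. A simplified sketch (the actual CCES estimates would also incorporate survey weights, which I omit here):

```python
import math

# 95 percent confidence interval for a sample proportion.
def prop_ci(successes, n, z=1.96):
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# e.g. 170 of 200 partisans in a state voting for their co-party candidate:
lo, hi = prop_ci(170, 200)
print(f"{lo:.3f}, {hi:.3f}")  # roughly 0.800 to 0.899
```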


Few large divergences between presidential and gubernatorial party voting rates appear. Across most of the states here, people vote their party to similar (high) degrees, whether it’s a governor’s or president’s race. However, a few trends are notable. In Montana, one of the states that went to different parties, the party voting rate was 10.5 percentage points higher in the presidential race than in the gubernatorial race. 17 percent of Montanan partisans voted for a candidate outside their own party, suggesting some split-ticket voting took place. That should help clarify why the Democratic gubernatorial candidate (Steve Bullock) ran 14 percentage points ahead of the Democratic presidential candidate (Hillary Clinton), for example.

Similarly, a statistically significant difference occurs in New Hampshire in the party voting rates by race type. While 94.9 percent of partisans voted for their co-party candidate in the governor’s race, fewer did in the presidential race, at 88.5 percent. This split likely made the pairing of a Democratic presidential win and Republican gubernatorial win possible. One other mixed-result state sees this type of split–West Virginia, which had an 84.6 percent party voting rate in its presidential race but only a 70.2 percent rate in the gubernatorial race. West Virginians voted their party at a higher rate on the presidential ballot than on the gubernatorial ballot, possibly paving the way for a Democrat winning the governor’s office while a Republican won the state at the presidential level.

Interestingly, this comparison sheds little light in the case of Vermont, where a Democrat in Clinton won 61.1 percent of the presidential vote but a Republican in Phil Scott won 52.9 percent of the gubernatorial vote. Party voting does not diverge by much by race type. I looked to see whether behavior by non-partisans (pure Independents) could be driving the different results, but little difference by race type appears there as well. Limitations of the survey data used may be at play here, as the sample of Vermonters has more Democratic voters for the governor’s race than it should (a 50-45 Democratic advantage among those surveyed even though it should be closer to 53-44 Republican).

Regardless, differing party voting rates may have played a role in the divergent presidential/gubernatorial race outcomes in West Virginia, Montana, and New Hampshire. Not only does that offer an indication of when partisanship exerts less force on vote choice, but it also might bear on when races become nationalized or not. As Dan Hopkins discusses in the description of his forthcoming book, these patterns–especially from the first figure–could indicate the level of nationalization of a race, when down-ballot candidates become tied to their national party and presidential candidate, and even whether the activation of party voting corresponds to the nationalization of an election.


State-Level Variation in Trump Approval Rating Decline

There’s no question that since the start of his presidency, Donald Trump has seen his approval rating drop considerably. Starting out with a slightly net positive rating, the president’s approval numbers now stand at 39 percent approve and 56 percent disapprove according to HuffPost Pollster’s tracker. Heterogeneity in this approval decline will likely have some bearing on his and his party’s electoral fortunes. For example, if much of this approval rating drop concentrates among Americans already planning to vote against him, Trump stands to suffer less from these overall trends. If the decline occurs in key parts of his base or in parts of the country previously behind him in support, that becomes more problematic.

As part of hundreds of thousands of survey interviews it has done during the Trump presidency, Morning Consult recently released state-level Trump approval rating data at two time points: in January at the start of Trump’s time in office, and more recently in September. I calculated Trump’s net approval (approve% – disapprove%) at each point to then create state-level net approval change. This ranged from a 31-point drop (in Illinois) to an 11-point drop (Louisiana), with an average decline of 19 points from January to September. I merged this data with 2016 state election vote shares for Trump and Hillary Clinton, from which I calculated Trump’s margin of victory. Using these two pieces of data–Trump election margin and Trump net approval change–I could see where the decline in Trump approval during the first year of his presidency has concentrated most. The below graph shows this, plotting Trump margin against net approval change, with each data point represented by the state’s abbreviation and color corresponding to whether Clinton (blue) or Trump (red) won the state:


As is visually clear, much of Trump’s approval rating decline has occurred in states where he has less underlying support. Average net decline (19 points) is large across the board, but that hides important variation: states with above-average decline tend to be ones that Clinton won in 2016, while states with below-average decline (i.e. states that aren’t souring on Trump as quickly) are more likely to have been won by Trump in the election. The relationship is not overwhelmingly strong–the adjusted R-squared is 0.32, and if a regression line were plotted, there would be a lot of variation around that line along different x-values (see CA, MD, and HI in particular as large deviations). Nevertheless, the pattern remains evident, and it suggests Trump has seen his greatest approval rating losses in states where he already had low levels of support–and perhaps where he could most afford to lose support, if anywhere. Large overall declines in Trump approval rating will still prove important for shaping future election results, of course, but not as much as they would if they occurred evenly across states, for example.
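The state-level calculation behind this comparison can be sketched with invented approval figures for two states (the numbers below are illustrative, chosen only to mirror the ranges reported above):

```python
# state -> (January net approval, September net approval), invented values
approval = {"IL": (7, -24), "LA": (27, 16)}
# Trump's 2016 margin of victory by state (approximate, for illustration)
margin16 = {"IL": -17.1, "LA": 19.6}

# Net approval change, then merge with the election margin per state.
change = {st: sep - jan for st, (jan, sep) in approval.items()}
merged = {st: (margin16[st], change[st]) for st in approval}
print(merged)  # {'IL': (-17.1, -31), 'LA': (19.6, -11)}
```

Each state then contributes one point to the scatterplot: election margin on the x-axis, net approval change on the y-axis.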

I’ll briefly touch on one other interesting result. In critical “swing states,” which I’ll define here as those where the absolute margin between Clinton and Trump vote shares was less than five percentage points, a roughly even split emerges in terms of pace of approval decline: five of the 11 swing states have soured on Trump at a greater rate than the national average, while the other six have done so at a below average rate. Given the fundamental role these types of states play in deciding national elections, this more even Trump approval decline is important to take note of as well.


Issue Positions and Identity in White Southern Partisan Realignment

The book Democracy for Realists is incredibly important for understanding the current American political environment, but as its authors Christopher Achen and Larry Bartels show, it also sheds light on key historical events. In one particularly informative example, Achen and Bartels apply their framework–the predominance of social identities and groups over issues and policy preferences for shaping political outcomes–to the question of what drove white partisan realignment in the South. Conventional wisdom holds that differences in opinion on racial policy issues underpinned Southern white flight from the Democratic Party. Achen and Bartels, however, demonstrate that the evolving partisan distribution of Southern whites did not differ much by opinion on key issues, such as support for or opposition to (1) enforced racial integration in schools or (2) government aid for blacks. Instead, Southern whites on either side of these issues moved just about equally away from the Democratic Party and to the Republican Party, leading Achen and Bartels to conclude that white Southern partisan realignment was not about policy issues. In further analysis, the authors show the partisan movement centered more on white Southern identity, proxied by feeling thermometer ratings of “Southerners,” as those strongest in this identity were most likely to have left the Democratic Party.

While not as specific as the policy preference questions Achen and Bartels used, there is some other interesting data in the ANES–not used by the authors–about general issue positions and perceptions speaking to racial conservatism. I wanted to check these, as well as the Southern feeling thermometer the authors used, as a way to further shed light on white Southern partisan realignment–and whether it varied more by issue positions or by indicators of identity attachment.

For a couple years in the 1960s and 70s, the ANES asked respondents whether they favored desegregation, strict segregation, or something in between. Below, I plot Democratic margin (Democratic % – Republican %) by position on this issue among whites in the South. (Note: In all of the below plots, point size corresponds to sample size, to give a sense of the certainty of the estimates and serve as a reminder that these should be interpreted with caution, as they’re not very precise.)


This is a short time frame, but if issue positions were driving partisan realignment, we would expect people who favored strict segregation/something in between to become less Democratic (i.e. drop further downward along the y-axis) at a faster rate than those who favored desegregation. At least in these early stages of realignment shown here, that’s not the case. There is movement (downward) away from the Democratic Party, but it doesn’t consistently occur in either of these issue position groups to a greater degree. Instead, those favoring the more racially liberal position of desegregation (the red line) trend Republican at faster rates in some of these years.

Another question, with a longer time span, is also informative. From the 1960s to the 90s, ANES respondents indicated whether they believed civil rights leaders pushed too fast, too slowly, or moved at the right speed. While not about a specific policy, the question does capture racial ideology to some extent–answers of “Too fast,” plotted in red in the below graph, mark the more conservative response.


As the graph shows, shifts away from the Democratic Party do not follow conservative or liberal positions on this issue. White Southerners who believed civil rights leaders pushed too fast and those who believed leaders pushed too slowly/at the right speed were about equally likely to leave the Democratic Party over time. Once again, this goes to show that key racial issues of the day did not shape partisanship change in the white South.

In conjunction with similar analysis by Achen and Bartels that shows the same dynamic, the main takeaway here is that white Southern movement away from the Democratic Party and to the Republican Party does not appear to be associated with positions on racial issues. To argue in favor of an identity-driven partisan change story, Achen and Bartels focus on a feeling thermometer of “Southerners” (similar ratings are asked of other social groups too). While far from perfect, this measure should capture some semblance of Southern identity–what Achen and Bartels argue contributes most to the realignment. As with the prior graphs, I wanted to check how the white Southern partisan distribution varies by strength of this Southern identity proxy. I constructed a “Strong Southern Identity” measure (at the 75th percentile of the Southerners thermometer rating) and a “Weak Southern Identity” measure from this ANES question, and plotted how the margin for Democratic partisan identification varied over time by these two identity strength levels. (Note: Different handling of this data–e.g. using the median or a rating of 50 as the cutoff for high or low identity strength–produces similar results.)
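The percentile cutoff construction can be sketched as follows. I use a nearest-rank percentile here, which is one simple convention; the thermometer ratings are hypothetical 0-100 values, not actual ANES responses.

```python
import math

# Nearest-rank percentile (a simple convention; other definitions exist).
def percentile(values, q):
    s = sorted(values)
    k = max(0, math.ceil(q / 100 * len(s)) - 1)
    return s[k]

# Hypothetical "Southerners" thermometer ratings on the 0-100 scale.
ratings = [40, 50, 60, 70, 85, 90, 97, 100]
cutoff = percentile(ratings, 75)
strong = [r for r in ratings if r >= cutoff]  # "Strong Southern Identity"
print(cutoff, strong)  # 90 [90, 97, 100]
```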


Although this thermometer rating isn’t asked in several years, a pattern emerges: starting by the mid- to late-1970s, white Southerners with the strongest sense of Southern identity become more Republican over time than those with a weaker sense of this identity. Specifically, in 1976, those with strong Southern identities were 64 percent Democratic and 19 percent Republican. By 2008, they were 27 percent Democratic and 63 percent Republican. On the other hand, in 1976, those with weak Southern identities were 50 percent Democratic and 33 percent Republican. By 2008, they certainly had changed their partisanship too, but not to the same degree: they were 36 percent Democratic and 49 percent Republican. In sum, over this 32-year span, the partisanship of strong Southern identifiers changed a net 80 points in favor of the GOP–for weak Southern identifiers, the swing was less than half that, at just 31 points.

Taking this graph and earlier ones together, these results further reinforce the notion–as established by Achen and Bartels–that identity, more so than racial conservatism or liberalism on issues, played a bigger role in the partisan realignment of white Southerners. The power of social identity relative to that of policy preferences for political behavior seems to dominate today’s political scene–perhaps this dynamic is a bigger part of American political history than commonly accepted as well.


A Quick Look at Response Order Experiment Results

Does the order in which a survey respondent sees a web question’s response options affect the answer? This question has often been probed in survey research, with tests typically finding a primacy effect. When taking surveys where they can see an entire response set at once (i.e. not phone surveys, for which recency effects come into play), respondents are biased toward selecting response options that appear earlier (see here for a review of past work on this). This satisficing behavior is problematic in that it breaks the assumption that respondents consider the entire response option set when answering a question; instead, they settle on the earliest options that seem reasonably acceptable. Notably, this could produce an inaccurate reflection of actual opinion if later, overlooked response options better capture the respondent’s opinion.

In a recent survey I conducted, I embedded a response order experiment to see whether such response order effects had been plaguing the student surveys I’ve been running. Specifically, on various questions about social/academic life perceptions and experiences during students’ freshman years, I assigned a random half of survey respondents to see a certain response order for a question, and the other half of survey-takers to see the reversed response order for that same question (note: “Not sure” options always appeared at the end of a set). I did this for 15 questions (one of which was actually a five-question grid), and checked to see if response percentages were statistically significantly different between the groups that saw different response orderings. In short, I did not find any significant response order effects. While there were differences in the expected direction (when responses appeared earlier in the set, they were chosen more often), none of these differences attained significance at the 0.05 level.
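The significance checks amount to standard two-proportion comparisons between the two ordering groups. A sketch with illustrative counts, sized to roughly match the groups in this experiment (the actual checks used weighted percentages, so this is a simplification):

```python
import math

# Two-proportion z-test: is a response chosen at different rates
# under the two orderings?
def two_prop_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# e.g. a response chosen by 90 of 180 under one ordering, 75 of 180 under
# the other: a 50% vs 42% split.
z = two_prop_z(90, 180, 75, 180)
print(round(z, 2))  # about 1.59, short of the 1.96 cutoff at the 0.05 level
```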

A few response order differences came close to significance, though, which I want to briefly touch on. When students were asked how much they missed home during their freshman year, more chose “A little” when it appeared second in the response set (49 percent) than when it appeared third (37 percent). The small sample size for the experimental groups (roughly 180 weighted N in each) made for larger 95 percent confidence intervals, and thus these groups are not significantly different here. Interestingly, the difference does not appear as drastic for the other response options for this question.


When students were asked whether they have regrets about coming to Dartmouth, a noticeable–though not significant–divide appears depending on whether they see the response option “Yes” or “No” first. When “Yes” appears first, 55 percent of students say they have regrets while 38 percent say they don’t. When “No” precedes “Yes” in the response set, 41 percent say they have regrets while 51 percent say they don’t. Again, conclusions from this are limited by sample size constraints, but it’s still notable that response tendencies swing in this manner, especially as it’s just a two-option response inversion (rather than the four-option response sets that are reversed in other cases).


I would need a larger sample to confirm, but it does seem that for some questions–in this case, those that trended more sensitive than other questions in the survey–response ordering matters. For aggregate results, which are always the ones reported, randomly reversing the response order would alleviate some of these concerns. At the same time, it’s worth keeping in mind that this only occurred for two of 15 questions (in truth, 19, given the grid questions), so it’s not as serious a problem–besides the fact that none of the response order effects were significant anyway.


Social Exclusion and Demographic Determinants of Minority Group Partisanship


In a recent Journal of Politics article, Alexander Kuo, Neil Malhotra, and Cecilia Hyunjung Mo make a very interesting and novel contribution to our understanding of partisan identification. In what’s particularly relevant to non-white minority groups, the authors argue that experiences of social exclusion on the basis of one’s racial/ethnic group membership can influence political identity. People can interpret individual experiences of exclusion as group exclusion. When one party is considered more exclusionary, these experiences can define which party best represents group interests, motivating greater attachment to/detachment from certain parties. Kuo et al. cite past research to establish the prevailing view of the Democratic Party as the party most beneficial to ethnic minority groups and the less exclusionary one. As a result, feelings of social exclusion should translate into greater identification with and support for the Democratic Party.

