Does Approach to Coding Party ID Produce Different Over Time Pictures of Partisanship Stability?

Do different approaches to constructing the partisanship distribution from the traditional 7-point party ID survey scale result in different pictures of over-time partisanship stability? That's a small question I had after reading a recent Pew Research report on weighting approaches for non-probability opt-in samples. The analysis considered weights for political variables, the most important being party identification. Pew used a partisanship benchmark built from a particular treatment of the party ID variable: coding Independent leaners (those who say they are Independents when first asked but admit they lean toward a party in a follow-up question) as Independents, not as members of the party toward which they lean. This decision is usually problematic, as these leaners overwhelmingly resemble regular partisans in voting proclivities, ideological self-identification, and issue positions, as I discuss in a past blog post. Given this evidence, I was curious about Pew's decision to construct a party ID weighting benchmark that treats leaners as Independents.

Additionally, I wondered whether their caution about weighting on partisanship, given over-time change in the partisanship distribution (see page 27), might be shaped by their treatment of Independent leaners. For example, their own data show, particularly in recent years, that grouping leaners with their parties produces a more stable over-time portrait of partisanship than leaving leaners ungrouped as Independents. To shed light on this question, I turned to three major surveys that measure the public's partisanship over time: the American National Election Studies (ANES), the General Social Survey (GSS), and the Cooperative Congressional Election Study (CCES). Though not all of them extend as far back as the ANES, trends within each survey should still be informative. In the graph below, I calculate the partisanship distribution for each survey-year cross-section by survey source and by approach to handling Independent leaners: grouped (with parties) or ungrouped (left as Independents). I also compute the standard deviation of the over-time partisanship measurements by survey source and leaner coding approach, which I interpret as an indicator of variability. (One note: because each survey covers a different number of years, SDs should be compared between leaner coding approaches within a survey, not across surveys.)
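To make the two coding approaches concrete, here is a minimal sketch of the computation. It assumes a generic 7-point party ID scale (1 = Strong Democrat through 4 = pure Independent through 7 = Strong Republican) and toy cross-sections; the variable coding and data are hypothetical stand-ins, not the actual ANES/GSS/CCES variables or results.

```python
# Sketch: collapse a 7-point party ID scale two ways (leaners grouped
# with parties vs. left as Independents), then compare the variability
# of the resulting over-time series. Scale coding here is assumed:
# 1-2 = Dem identifiers, 3 = Dem leaner, 4 = Independent,
# 5 = Rep leaner, 6-7 = Rep identifiers.
from statistics import pstdev

def collapse(pid7, group_leaners):
    """Collapse a 7-point party ID code to Dem/Ind/Rep."""
    if pid7 in (1, 2) or (group_leaners and pid7 == 3):
        return "Dem"
    if pid7 in (6, 7) or (group_leaners and pid7 == 5):
        return "Rep"
    return "Ind"

def pct_dem(responses, group_leaners):
    """Share of respondents coded Democratic, in percentage points."""
    coded = [collapse(r, group_leaners) for r in responses]
    return 100 * coded.count("Dem") / len(coded)

# Toy cross-sections: one list of 7-point responses per survey year.
years = {
    2012: [1, 2, 3, 3, 4, 5, 6, 7, 2, 3],
    2014: [1, 1, 2, 4, 4, 5, 6, 6, 7, 3],
    2016: [2, 3, 3, 4, 5, 5, 6, 7, 1, 2],
}

for grouped in (True, False):
    series = [pct_dem(resps, grouped) for resps in years.values()]
    label = "grouped" if grouped else "ungrouped"
    print(f"{label}: SD of %Dem across years = {pstdev(series):.2f}")
```

The same collapse-then-tabulate step would be run on each survey's actual time series (with survey weights applied), once per leaner coding approach, to produce the SDs discussed below.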

[Figure: over-time partisanship distribution by survey source and leaner coding approach (ptybench_013118)]

Data from the ANES provide evidence for my suspicion: coding leaners as Independents inflates the over-time variability in partisanship that Pew worries about. While the grouped leaner approach yields an SD of 3.45, the ungrouped leaner approach yields an SD of 5.03. In other words, the ungrouped approach produces an over-time portrait of partisanship with much more variation than the grouped approach. One implication is that researchers who want to weight on party ID but worry about its variability may be safer using the grouped leaner approach.

However, evidence from the GSS and CCES points in the opposite direction. In the GSS, the SD is larger for the grouped leaner approach (4.49) than for the ungrouped leaner approach (4.26). In other words, the GSS time series suggests that coding leaners as Independents results in a more stable picture of over-time partisanship. The CCES data imply the same conclusion, as the grouped leaner approach is more variable (SD = 2.80) than the ungrouped leaner approach (SD = 2.54).

In sum, I cannot draw concrete conclusions about the best way to code Independent leaners when constructing a party identification benchmark. It's worth noting that the difference in variability, as measured by the standard deviation, is largest in the ANES case, where the grouped leaner approach offers the more stable partisanship metric. Still, though not to the same degree, evidence from the GSS and CCES supports the opposite takeaway. At the very least, I can conclude that there does not appear to be a consistent difference in over-time partisanship stability resulting from the two coding decisions. Using a party ID benchmark in which leaners are ungrouped does not exaggerate over-time partisanship variability as I thought it might, at least not in a consistent manner. This is of course a very simple analysis, but it suggests that leaner coding is not much of a problem for partisanship benchmark construction. At the same time, it's worth keeping in mind that in almost all other cases, researchers are better off sticking with the grouped leaner approach.