Social Exclusion and Demographic Determinants of Minority Group Partisanship

Introduction

In a recent Journal of Politics article, Alexander Kuo, Neil Malhotra, and Cecilia Hyunjung Mo make an interesting and novel contribution to our understanding of partisan identification. Particularly relevant to non-white minority groups, the authors argue that experiences of social exclusion on the basis of one's racial/ethnic group membership can shape political identity. People can interpret individual experiences of exclusion as exclusion of their group. When one party is considered more exclusionary, these experiences can define which party best represents group interests, motivating greater attachment to, or detachment from, certain parties. Kuo et al. cite past research establishing the prevailing view of the Democratic Party as the party most beneficial to ethnic minority groups and the less exclusionary of the two. As a result, feelings of social exclusion should translate into greater identification with and support for the Democratic Party.

American Political Ideology, A Twitter Bot Approach (The Crosstab)

The lack of ideological constraint among the American public (holding liberal stances on some issues and conservative stances on others) has been a defining feature of much political science research on mass belief systems. The modern-day political climate makes it easy to overlook the reality that few people fall entirely on one side of the political divide. With that in mind, G. Elliott Morris (of The Crosstab) and I worked together to devise a Twitter bot to illustrate this lack of ideological constraint among Americans. Using data from the 2016 CCES, our program randomly selects an individual who took this nationally representative survey, randomly selects three of that individual's expressed issue positions, and tweets out those positions along with the individual's party and ideology. More details on the process can be found in this blog post on The Crosstab, and the Twitter account itself can be found here.
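The bot's selection logic can be sketched roughly as follows. The respondent records and field names here are illustrative stand-ins, not the actual CCES variables or our production code:

```python
import random

# Hypothetical stand-in for the 2016 CCES respondent data; the real bot
# reads the full survey file. Each record holds a party label, an
# ideology label, and that respondent's expressed issue positions.
respondents = [
    {
        "party": "Independent",
        "ideology": "Moderate",
        "positions": [
            "Supports raising the minimum wage",
            "Opposes banning assault rifles",
            "Supports withdrawing from the Trans-Pacific Partnership",
            "Opposes repealing the Affordable Care Act",
        ],
    },
    # ... thousands more respondents in the real data ...
]

def compose_tweet(rng=random):
    """Pick a random respondent and three of their issue positions."""
    person = rng.choice(respondents)
    picks = rng.sample(person["positions"], 3)  # three positions, no repeats
    header = f"{person['party']}, {person['ideology']}:"
    return "\n".join([header] + [f"- {p}" for p in picks])

print(compose_tweet())
```

The random sampling of three positions (rather than the full battery) is what surfaces the "mixed" profiles: a respondent's three drawn positions frequently straddle the liberal/conservative divide.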

Update: Check out an article over at Slate about the bot here.


Survey Mode Effects in ANES Partisanship Measurement

As I've talked about in the past, taking into account differences in survey mode (whether it's conducted with a live caller, in person, online, etc.) can sometimes be important for interpreting a survey's results. One of the most prominent sources of detailed, historical political survey data, the American National Election Studies (ANES), incorporated interviews over the internet starting in 2012, complementing its long-running face-to-face/in-person component. As a high-quality, (relatively) high-response-rate survey, this offers a promising way to gauge differences in survey responses across two prominent modes: in-person/live interviewing vs. self-administered internet surveys.

Partisanship is a variable central to behavioral political research, and given its near-unmatched importance, I was curious to check whether the 2012 and 2016 ANES distributions of partisanship varied by mode: the face-to-face/CASI mode compared to the internet/web mode. I'll save a closer look at pure Independents (i.e., those who don't lean toward either party) for later, but here's what the partisanship distribution for the 2012 iteration looks like for the six non-pure-Independent classifications, broken up by mode:

[Figure: 2012 ANES partisanship distribution (six partisan categories) by survey mode]

And here's what that same distribution looks like in the 2016 version:

[Figure: 2016 ANES partisanship distribution (six partisan categories) by survey mode]

There's always some sampling error associated with these survey percentages, so it's fair to say there isn't much of a difference by survey mode among the groups that initially describe themselves as partisans: the "Strong Democrat/Republican" and "Not very strong Democrat/Republican" categories. That conclusion holds in both the 2012 and 2016 data.
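For a rough sense of the sampling error involved, the 95 percent margin of error for a single survey percentage is z * sqrt(p(1-p)/n). A minimal sketch, using a hypothetical mode group size rather than the actual ANES sample sizes:

```python
import math

def moe(p, n, z=1.96):
    """95% margin of error for a sample proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# e.g. a 20% response share in a mode with roughly 2,000 respondents
# (a hypothetical n, chosen only for illustration)
print(round(100 * moe(0.20, 2000), 1))  # about 1.8 percentage points
```

By this yardstick, percentage gaps of a point or two between modes are well within what sampling error alone could produce; larger gaps deserve more attention. (This simple formula also ignores the ANES's complex weighting, which widens the true intervals somewhat.)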

However, for the partisan subgroups on the right-most side of the graph, a survey mode effect does emerge. These Democratic-leaning and Republican-leaning Independents describe themselves as Independent when first asked about their party affiliation but, when pressed, reveal a direction in which they lean. In 2012, there were six percentage points more Democratic leaners and nine percentage points more Republican leaners in face-to-face interviews than in web interviews. The same difference materializes in 2016, where leaners for both parties appear in greater numbers in the FTF mode. Given the stigma increasingly associated with partisan politics, identifying as a partisan can be seen as a socially undesirable response (more on this at the bottom of this post). This social desirability bias has been understood to be stronger in live in-person interviews than in self-administered interviews without another human involved. Because the ANES FTF mode is an in-person mode, it makes sense that closet partisans (the leaners) appear in larger numbers there than in the self-administered online surveys.

Breaking up partisanship responses by each category can be informative, as shown above. An alternative approach looks at partisanship on a three-point scale, with strong, weak, and leaning partisans for each party grouped into two categories (Democrats and Republicans) and pure Independents separated into their own group. This simpler distribution depicts an equally if not more important landscape of partisanship among Americans. Below is this distribution in the 2012 ANES, again broken up by mode:

[Figure: 2012 ANES three-point partisanship distribution by survey mode]

And here’s the distribution in 2016 by survey mode:

[Figure: 2016 ANES three-point partisanship distribution by survey mode]
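The three-point grouping used in these graphs amounts to a simple recode of the seven-point scale, with leaners folded into their party. A sketch (the category labels are paraphrased, not the exact ANES codebook strings):

```python
# Collapse the seven-point ANES party ID scale into the three-point
# grouping: leaners and weak partisans fold into their party, and pure
# Independents stay in their own category.
THREE_POINT = {
    "Strong Democrat": "Democrat",
    "Not very strong Democrat": "Democrat",
    "Independent-Democrat": "Democrat",
    "Independent": "Independent",
    "Independent-Republican": "Republican",
    "Not very strong Republican": "Republican",
    "Strong Republican": "Republican",
}

print(THREE_POINT["Independent-Democrat"])  # leaners count as partisans
```

Folding leaners in with partisans follows the common finding that leaners behave much like weak partisans; it also means the three-point "Independent" share isolates exactly the pure Independents discussed below.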

Within each year, Democratic and Republican identification is roughly similar across the FTF and internet modes, though it's worth noting that in 2016 the FTF survey holds six percentage points more Republicans than the web survey. The most notable and consistent mode difference occurs for the pure Independent classification (shown in dark grey in the graphs above). In 2012, Independents make up 16 percent of the web mode distribution but just 10 percent of the FTF mode. The exact same mode difference for this group appears in the 2016 data as well, with six percentage points more pure Independents in web ANES surveys than in in-person ANES surveys.
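As a quick check that a gap like 16 percent vs. 10 percent exceeds ordinary sampling error, one can sketch a two-proportion z-test. The sample sizes below are round hypothetical placeholders, not the actual ANES mode group sizes:

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """z statistic for the difference between two sample proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)   # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 2012 pure Independents: 16% web vs. 10% FTF, with hypothetical
# group sizes of 2,000 respondents per mode
z = two_prop_z(0.16, 2000, 0.10, 2000)
print(round(z, 2))  # well past conventional significance thresholds
```

Even with more conservative assumptions about group sizes and design effects, a six-point gap on shares this size is hard to attribute to sampling error alone, which motivates looking for a design explanation, as below.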

So what explains this mode difference in partisanship? The answer appears to lie in how the follow-up question is posed to respondents who first describe themselves as Independents. People who give this answer when first asked about their partisanship (as well as those answering "No Preference," "Other Party," or "Don't Know") are then asked whether they're closer to one of the major parties. As seen in the questionnaire wording below for this follow-up question, while people see "Neither" as an option if they're taking the survey online, they are not given this option during FTF interviews, where they can only voluntarily offer such a response (indicated by the "{VOL}" text).

[Figure: ANES questionnaire wording for the partisanship follow-up question, by mode]

It thus looks like more people select "Neither" (which constitutes the pure Independent response of interest here) when they see it listed in an internet survey than voluntarily respond with "Neither" in FTF interviews. This represents a survey mode effect for pure Independent identification, and by extension suggests that one's understanding of partisanship from surveys can vary (slightly) depending on the mode the results come from.

Making Independent identification constant across modes presents challenges that might not be solvable. Still, the approach taken in the ANES FTF mode carries one flaw: if initial Independents are not offered a "Neither" option during an FTF interview, this group will feel more pressure to express a partisan lean. Given that Independents possess relatively less political knowledge, sophistication, and interest, which makes them more susceptible to survey design effects, pressures like these could force an unreliable expression of partisan inclination. On this basis, I would argue that a "Neither" option on the partisanship follow-up question should always be offered, whether verbally in live interviews or visually in online ones.

This is, of course, speculation about a research question that is testable. Moreover, for the ANES, changing this aspect of its FTF interview style could undermine its time-series goal of making survey responses (e.g., on partisanship) comparable across time; not offering this response option in past years but offering it in future years creates a problem. At the same time, if this were so serious a concern, the ANES wouldn't have introduced the internet mode in 2012. Needless to say, adjudicating these considerations is complicated, but it remains important given that the subject of this discussion, partisanship, is a variable so central to understanding American politics.
