The book Democracy for Realists is incredibly important for understanding the current American political environment, but as its authors Christopher Achen and Larry Bartels show, it also sheds light on key historical events. In one particularly informative example, Achen and Bartels apply their framework–the predominance of social identities and groups over issues and policy preferences in shaping political outcomes–to the question of what drove white partisan realignment in the South. Conventional wisdom holds that differences of opinion on racial policy issues underpinned Southern white flight from the Democratic Party. Achen and Bartels, however, demonstrate that the evolving partisan distribution of Southern whites did not differ much by opinion on key issues, such as support for or opposition to (1) enforced racial integration in schools or (2) government aid for blacks. Instead, Southern whites on either side of these issues moved just about equally away from the Democratic Party and toward the Republican Party, leading Achen and Bartels to conclude that white Southern partisan realignment was not about policy issues. In further analysis, the authors show that the partisan movement centered more on white Southern identity, proxied by feeling thermometer ratings of “Southerners”: those strongest in this identity were the most likely to have left the Democratic Party.
Does the order in which a survey respondent sees a web question’s response options affect the response? This question has often been probed in survey research, with tests typically finding a primacy effect: when respondents can see the entire set of response options at once (i.e. not on phone surveys, where recency effects come into play), they are biased toward selecting options that appear earlier in the list (see here for a review of past work). This satisficing behavior is problematic because it breaks the assumption that respondents consider the entire response option set when answering a question; instead, they choose the first option they find reasonably acceptable. Notably, this could produce an inaccurate reflection of actual opinion if later, overlooked response options better capture the respondent’s opinion.
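The satisficing mechanism described above can be illustrated with a toy simulation (the options and acceptance probabilities here are entirely hypothetical, not drawn from any actual survey): each simulated respondent scans the options in their presented order and picks the first one deemed acceptable. Even when all options are equally acceptable, position alone produces a large primacy gap.

```python
import random

def satisficing_choice(options, acceptability, rng):
    """Scan options in presented order and return the first one deemed
    acceptable (a toy satisficing model); fall back to the last option
    if none are accepted."""
    for opt in options:
        if rng.random() < acceptability[opt]:
            return opt
    return options[-1]

rng = random.Random(0)
options = ["A", "B", "C", "D"]
# All options equally acceptable: any order effect comes purely from scan order.
acceptability = {opt: 0.5 for opt in options}

n = 10_000
forward = sum(satisficing_choice(options, acceptability, rng) == "A"
              for _ in range(n)) / n
reverse = sum(satisficing_choice(options[::-1], acceptability, rng) == "A"
              for _ in range(n)) / n
print(f"P(choose A) listed first: {forward:.2f}, listed last: {reverse:.2f}")
```

Under this toy model, the same option is chosen far more often when listed first than when listed last, which is the primacy bias the survey literature documents.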
In a recent Journal of Politics article, Alexander Kuo, Neil Malhotra, and Cecilia Hyunjung Mo make a very interesting and novel contribution to our understanding of partisan identification. Particularly relevant to non-white minority groups, the authors argue that experiences of social exclusion on the basis of one’s racial/ethnic group membership can influence political identity. People can interpret individual experiences of exclusion as group exclusion. When one party is considered more exclusionary, these experiences can define which party best represents group interests, motivating greater attachment to or detachment from certain parties. Kuo et al. cite past research to establish the prevailing view of the Democratic Party as the party most beneficial to ethnic minority groups and the less exclusionary one. As a result, feelings of social exclusion should translate into greater identification with and support for the Democratic Party.
The lack of ideological constraint among the American public–possession of liberal stances on some issues and conservative stances on others–has been a defining feature of much political science research on mass belief systems. The modern-day political climate makes it easy to overlook the reality that few people fall entirely on one side of the political divide. With that in mind, G. Elliott Morris (of The Crosstab) and I worked together to devise a Twitter bot that illustrates this lack of ideological constraint among Americans. Using data from the 2016 CCES, our program randomly selects an individual who took this nationally representative survey, randomly selects three of that individual’s expressed issue positions, and tweets out those positions along with the individual’s party and ideology. More details on this process can be found here, in a blog post on The Crosstab, and the Twitter handle itself can be found here.
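For a sense of the mechanics, here is a minimal sketch of the bot’s selection step. The respondent record, field names, and issue positions below are illustrative stand-ins, not actual CCES variables or data.

```python
import random

# Hypothetical toy stand-in for 2016 CCES respondents: each record holds
# party, ideology, and a dict of expressed issue positions. All values
# here are invented for illustration.
respondents = [
    {
        "party": "Democrat",
        "ideology": "Moderate",
        "positions": {
            "Gun control": "Supports background checks",
            "Minimum wage": "Opposes raising the federal minimum wage",
            "Immigration": "Supports a path to citizenship",
            "Environment": "Opposes CO2 emissions regulation",
        },
    },
    # ... more respondents ...
]

def compose_tweet(pool, n_positions=3, rng=random):
    """Randomly pick one respondent, then randomly pick n of their
    issue positions, and format the result as tweet text."""
    person = rng.choice(pool)
    issues = rng.sample(list(person["positions"].items()), n_positions)
    header = f"A {person['ideology']} {person['party']}:"
    lines = [f"{issue}: {stance}" for issue, stance in issues]
    return "\n".join([header] + lines)

print(compose_tweet(respondents))
```

Because both the respondent and the subset of positions are drawn at random, repeated runs surface the cross-cutting issue combinations–liberal on some items, conservative on others–that the paragraph above describes.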
As I’ve talked about in the past, taking into account differences in the mode of a survey–whether it’s conducted with a live caller, in person, online, etc.–can sometimes be important for interpreting the survey’s results. One of the most prominent sources of detailed historical political survey data, the American National Election Studies (ANES), incorporated interviews over the internet starting in 2012, complementing its long-running face-to-face/in-person component. As a high-quality survey with a (relatively) high response rate, the ANES offers a promising way to gauge differences in survey responses across the two prominent modes: in-person/live interviewing vs. self-administered internet surveys.
A longstanding topic of interest, the voting behavior of the white working class–and socioeconomic divides in voting patterns more broadly–once again attracted considerable attention during and after the 2016 election. Some assessments that have placed the low-SES white vote in historical context have shown that this group voted more Republican in 2016 than it ever had in recorded history. These accounts often define socioeconomic status in terms of college degree attainment. Older related analyses draw distinctions between definitions of SES, and importantly demonstrate that SES divides play out differently across different areas of the country (distinguishing between the evolution of voting patterns inside and outside the South, for example).
Past findings regarding differential partisan nonresponse–driven by positive or negative news surrounding each major party–have largely come in the context of election seasons. Recently, I checked whether this phenomenon extends to other salient public opinion trends, such as presidential approval, outside a campaign season. Although I lacked optimal (raw panel) data, I did find evidence that tentatively confirms the main thrust of this past work: variation in public opinion depends on the partisan makeup of the same poll (indicative of varying partisan responsiveness to surveys). Examining this relationship separately among pollsters who do and do not account for partisan selection processes in their weighting methodology sheds particularly convincing light on this dynamic. The split by pollster type implies that while the relationship could also reflect individual-level shifts in party identification (and resulting changes in party composition), at least some of it has to stem from differential partisan nonresponse.
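As a rough sketch of the kind of check described above (with entirely made-up poll numbers standing in for real data), one could correlate each poll’s topline approval figure with the Democratic share of its sample:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

# Hypothetical poll-level data: (Democratic share of sample, approval %).
# These numbers are invented for illustration only.
polls = [(0.33, 41), (0.36, 43), (0.30, 39), (0.38, 45), (0.34, 42)]

dem_share = [d for d, _ in polls]
approval = [a for _, a in polls]
r = pearson(dem_share, approval)
print(f"Correlation between Dem share of sample and approval: {r:.2f}")
```

A strong positive correlation of this sort is consistent with the pattern described, though on its own it cannot separate differential nonresponse from genuine shifts in party identification; that is where the split by pollster weighting practice does the work.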