Past findings regarding differential partisan nonresponse, driven by positive or negative news surrounding each major party, have largely come in the context of election seasons. Recently, I checked whether this phenomenon extends to other salient public opinion trends, such as presidential approval, outside a campaign season. While lacking optimal (raw panel) data, I found evidence tentatively confirming the main thrust of this past work: that variation in public opinion depends on the partisan makeup of the same poll, which is indicative of varying partisan responsiveness to surveys. Examining this relationship among pollsters who do and do not account for partisan selection in their weighting methodology sheds particularly convincing light on this dynamic. This split by pollster type suggests that while the relationship could also reflect individual-level shifts in party identification (and the resulting changes in party composition), at least some of it must stem from differential partisan nonresponse.
To test how robust this finding was, I wanted to check some alternative approaches for gauging the link between swings in partisan survey response and swings in the same polls' outcome measures over time. One interesting alternative is to examine changes in partisan makeup and changes in outcomes of interest across consecutive polls from the same pollster. This builds on the analysis of publicly available polls in Gelman et al. (2016: 107), which addresses the same concept early in their paper (see Figure 1(b)). Such an approach is helpful because it better captures how movement in partisan makeup translates into movement in Trump approval. Comparing polls from the same pollster also makes for a cleaner comparison, since there is less need to worry about how pollsters differ in their other methodological choices. This approach produced two new measures that I plot in the graph further below. Here are the definitions of these new variables (for any one poll conducted at time t):
- Change in Net Republican = (Unweighted Republican % at t − Unweighted Democrat % at t) − (Unweighted Republican % at t−1 − Unweighted Democrat % at t−1)
- Change in Net Trump Approval = (Trump Approve % at t − Trump Disapprove % at t) − (Trump Approve % at t−1 − Trump Disapprove % at t−1)
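The two change measures above can be sketched in code. This is a minimal illustration with made-up numbers, and the column names (pollster, rep_pct, dem_pct, approve, disapprove) are hypothetical, not the actual dataset's fields; the key step is differencing within pollster so that changes are only computed between consecutive polls from the same survey house.

```python
import pandas as pd

# Hypothetical polls from two pollsters, each ordered by field date.
# All values are illustrative, not real poll numbers.
polls = pd.DataFrame({
    "pollster": ["A", "A", "A", "B", "B"],
    "rep_pct": [30, 33, 29, 28, 31],      # unweighted % Republican
    "dem_pct": [34, 32, 35, 36, 33],      # unweighted % Democrat
    "approve": [42, 45, 40, 39, 43],      # Trump approve %
    "disapprove": [52, 50, 54, 55, 51],   # Trump disapprove %
})

polls["net_rep"] = polls["rep_pct"] - polls["dem_pct"]
polls["net_approval"] = polls["approve"] - polls["disapprove"]

# Difference within pollster: the first poll from each pollster has no
# predecessor, so its change values are left as NaN.
g = polls.groupby("pollster")
polls["change_net_rep"] = g["net_rep"].diff()
polls["change_net_approval"] = g["net_approval"].diff()
```

Grouping before differencing matters: without it, the first poll from pollster B would be compared against the last poll from pollster A.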
As with prior analyses, most of my data comes from HuffPost Pollster’s dataset of polls conducted from the start of Trump’s presidency through July 18th. Unlike before, for polls missing sample partisan composition in that database, I searched pollsters’ topline and crosstab pages and manually recorded the missing figures. This made for a more comprehensive set of Trump approval data.
The below graph plots the relationship between the two aforementioned variables, distinguishing between pollsters who do and do not adjust for their sample’s partisan makeup (through party identification or past vote weights):
As the right-hand plot shows, a positive relationship emerges between consecutive-poll shifts in partisan makeup and shifts in net Trump approval. The left-hand side contrasts with this result: among pollsters that make some partisan adjustment, poll-to-poll changes in Trump approval are largely uncorrelated with the corresponding changes in their unweighted balance of partisans. The relationship among polls that don’t make these additional weighting adjustments is weaker than in my previous analysis (which did not track consecutive-poll change). Even so, it remains notable that 1) a positive relationship exists and 2) a clear difference emerges by pollster type. This alternative look provides additional evidence that swings in Trump approval are in part a byproduct of changes in the partisan composition of polls, which in turn points to a possible phenomenon of differential partisan nonresponse.
Below, I also show this same relationship between consecutive polls, but among individual pollsters (limited to pollsters with more than three approval rating polls as of July 18th). Regression lines for the relationship between the two variables are once again plotted in each panel. Triangular points denote pollsters that adjust for their sample’s partisan makeup, while circular points represent pollsters that do not.
Poll samples are small throughout this graph, so it should serve more as a qualitative look at a finer level (and as a check on whether certain pollsters disproportionately drive the earlier relationship). Broadly, this graph confirms the previous observations: unlike pollsters that weight by party or past vote, pollsters that make no adjustment for partisan sample makeup see their main outcome of interest, Trump approval, swing partly in conjunction with how many partisans select into taking their polls. This is evidenced by the many positive relationships among pollsters that don’t add special weights (circular points), in contrast with the three pollsters that do make adjustments and show little relationship: YouGov, ICITIZEN, and IBD/TIPP. Among survey houses with several polls during the Trump presidency, Trump approval numbers from Politico/Morning Consult and even SurveyMonkey appear most shaped by the partisan distributions of their samples.
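The per-pollster regression lines described above can be sketched as follows. This is a toy example under assumed data: the tidy layout, the column names, and the numbers are all hypothetical, and I use a plain least-squares slope per pollster, filtering to pollsters with more than three poll-to-poll changes, mirroring the more-than-three-polls cutoff.

```python
import numpy as np
import pandas as pd

# Hypothetical tidy data: one row per poll-to-poll change, with the
# pollster name attached. All values are illustrative.
changes = pd.DataFrame({
    "pollster": ["X"] * 4 + ["Y"] * 4,
    "change_net_rep": [-3, 1, 4, -2, -3, 1, 4, -2],
    "change_net_approval": [-4, 2, 5, -1, 1, 0, -1, 1],
})

# Keep pollsters with enough observations, then fit a least-squares
# slope of approval change on partisan-makeup change for each one.
counts = changes["pollster"].value_counts()
kept = changes[changes["pollster"].isin(counts[counts > 3].index)]
slopes = kept.groupby("pollster").apply(
    lambda d: np.polyfit(d["change_net_rep"], d["change_net_approval"], 1)[0]
)
```

A strongly positive slope for a given pollster would suggest its approval numbers move with its unweighted partisan mix; slopes near zero are what the party-weighted pollsters in the graph tend to show.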
The main takeaway here is similar to that of some past posts of mine. There are several ways to look at this phenomenon, but differential partisan nonresponse does seem to affect many pollsters in a way that suggests swings in Trump approval are partly sample artifacts, while pollsters that adjust for the partisan makeup of their polls tend to avoid such problems.