Gauging the Level of Over-Time Issue Polarization among Partisans

The question of the extent and nature of polarization in the American public has received much attention from political science. Though not always debated on this front, polarization in issue and policy opinion is a nuanced subject, and studying it sometimes runs into the limitations of survey data. For example, survey questions on policy issues often offer binary responses, which can measure which side individuals fall on but not the extremity of their positions. Furthermore, most data sources on mass issue opinion cannot give a historical account of polarization, as they simply have not been asking questions long enough. The American National Election Studies (ANES) suffers less from these pitfalls, however, having asked many of the same issue and ideological questions across several decades and doing so with survey scales that capture position extremity. As a result, analyzing the ANES can produce a uniquely informative picture of over-time polarization in issue/policy opinion–namely, speaking to how divided Democrats and Republicans have become in their opinions across various issue domains.

That’s what the below graph illustrates–the average position of both major partisan groups on nine issue/ideological time series questions asked in ANES surveys over the past few decades. Some of these questions use four- or seven-point scales in their original form, so to make them comparable, I recode each to run on a 0 to 1 scale, where 0 marks the most liberal position on the question and 1 the most conservative.
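
For concreteness, here is a minimal sketch (in Python/pandas, not the original code) of that rescaling and averaging step. The column names (year, pid3, and the issue items) and the scale ranges are hypothetical stand-ins for the actual ANES variables.

    import pandas as pd

    # Hypothetical issue items and their original scale ranges
    # (low value = most liberal, high value = most conservative).
    ISSUE_SCALES = {"abortion": (1, 4), "ideology": (1, 7), "aid_to_blacks": (1, 7)}

    def rescale_01(series: pd.Series, low: int, high: int) -> pd.Series:
        """Map an item onto 0-1, with 0 = most liberal and 1 = most conservative."""
        return (series - low) / (high - low)

    def party_means(anes: pd.DataFrame) -> pd.DataFrame:
        """Average rescaled position of each partisan group, by survey year."""
        df = anes.copy()
        for item, (low, high) in ISSUE_SCALES.items():
            df[item] = rescale_01(df[item], low, high)
        return (df.groupby(["year", "pid3"])[list(ISSUE_SCALES)]
                  .mean()
                  .reset_index())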

polarization070118

From a qualitative perspective, these average partisan positions suggest not much ideological polarization has developed over the last few decades. Partisan gaps certainly persist throughout the years on every issue, and some issues do see growing differences (e.g., abortion rights opinion comes closest to movement toward the ideological “poles”). But for the most part, Democrats and Republicans appear (1) not to be substantially divided on the issues and (2) not to be growing much more divided over time. Instead, year-to-year changes in opinion across the two partisan groups seemingly track one another, offering support for a “parallel updating” account. For example, the over-time correlation coefficients between the two groups’ positions are mostly positive (for seven of the nine series) and often moderate to strong in size (0.10, 0.26, 0.29, 0.30, 0.48, 0.90, and 0.91 for those seven).
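
As a rough sketch of that correlation check (again assuming the hypothetical party-by-year table produced by the earlier snippet), the over-time correlation for each issue could be computed along these lines:

    def party_series_correlations(means, items):
        """Correlation between the Democratic and Republican over-time series
        of average positions, one coefficient per issue item."""
        dem = means[means["pid3"] == "Democrat"].set_index("year")
        rep = means[means["pid3"] == "Republican"].set_index("year")
        return {item: dem[item].corr(rep[item]) for item in items}

    # e.g. party_series_correlations(party_means(anes), list(ISSUE_SCALES))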

On the other hand, plotting the yearly differences between Democrats and Republicans on each of the positions above makes it clear that in most areas, partisans are moving further away from each other on the issues over time. The below graph depicts these gaps and corroborates this trend of growing divisions.

polarization_2_070118

This graph puts a “magnifying glass” to the differences, so it’s worth keeping in mind the y-axis scale being used and how it accentuates these gaps. The growing differences are still not that sizable–and not to the extent of the considerable polarization in issue opinion that many assume defines American politics. Nevertheless, divisions on the issues have indeed grown over the last few decades and up through the last election year, most acutely for abortion rights, government aid to blacks, and self-described ideology.

Many might interpret these growing opinion divisions among the partisan masses as a response to growing polarization among party elites. In this sense, we might expect the partisans who are most politically aware and attentive to elite discourse to receive elite cues about changing issue positions the most, and to shift their opinions in ideologically consistent directions to a greater degree. The below graph plots the same average partisan position differences from above in green and the differences for only the most politically knowledgeable individuals in orange. Correct answers to a question about which party holds a House majority–importantly, a political knowledge question common to all the survey years of interest–distinguish high knowledge respondents.

polarization_3_070118

Comparing average partisan distance for all respondents versus just high knowledge ones confirms the expected pattern: high knowledge partisans–those likely most receptive to elite cues on policy issues–lead the way in driving ideological polarization. The orange line representing them lies above the green line (all respondents) across most years and issues, indicating greater issue opinion differences (which the y-axis measures) for this group. While there is no obvious benchmark for how large this gap should be, it is interesting that the differences between high knowledge respondents and all respondents are not that large–absent for some issues and muted in others.
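
A hedged sketch of how that subsetting might look, assuming the same hypothetical ANES frame plus a house_majority_correct indicator (1 = answered the House-majority knowledge question correctly):

    import pandas as pd

    def partisan_gap(df: pd.DataFrame, item: str) -> pd.Series:
        """Republican minus Democratic average position on a 0-1 item, by year."""
        means = df.groupby(["year", "pid3"])[item].mean().unstack("pid3")
        return means["Republican"] - means["Democrat"]

    def gaps_all_vs_high_knowledge(anes: pd.DataFrame, item: str) -> pd.DataFrame:
        """Yearly partisan gap for all respondents vs. the high-knowledge subset."""
        high = anes[anes["house_majority_correct"] == 1]
        return pd.DataFrame({"all_respondents": partisan_gap(anes, item),
                             "high_knowledge": partisan_gap(high, item)})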


Comparing Racial Attitude Change Before vs. Including the 2016 Election

As I’ve documented before, sizable racial attitude change has occurred among Democrats in the liberal direction over the last 5-10 years. An unsettled aspect of this development, though, is how much the 2016 election (elites, campaign messages, etc.) drove the racial liberalization as opposed to preceding forces (e.g., activism around racial issues). A quick review of some racial attitude survey data here can produce a rough answer to this question.

Panel survey data can isolate individual level change (actual changing of minds) in racial attitudes. The 2010-2014 Cooperative Congressional Election Study panel and the 2011-2016 Voter Study Group panel share two racial resentment (RR) items, which can shed light on attitude change during a span of years that includes the last election and thus captures factors from the 2016 election environment (2011-2016), as well as during a span that does not (2010-2014). The obvious caveat in comparing individual level racial attitude change across these two datasets is that they are different surveys with different people, so results should be treated as suggestive (though the data do come from the same vendor, YouGov, which offers some reassurance).

The two graphs below visualize individual level change from 2010 to 2014 and from 2011 to 2016. Similar to what I’ve done before, the first graph (Figure 1) breaks down 2014 RR responses by 2010 responses, and the second graph (Figure 2) breaks down 2016 RR responses by 2011 responses. The key portion of the graphs to pay attention to is the percentage of respondents who did not originally give the racially liberal response (most importantly, those taking a conservative position) who switch to a liberal opinion on each item, and how this varies by time span.
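
A minimal sketch of this breakdown, assuming a panel DataFrame with hypothetical columns such as rr_overcome_2010 and rr_overcome_2014 coded as "Conservative", "Neither/DK", or "Liberal":

    import pandas as pd

    def change_table(panel: pd.DataFrame, wave1_col: str, wave2_col: str) -> pd.DataFrame:
        """Row-normalized crosstab: each row shows where respondents who started
        in that category ended up (e.g., the share of wave-1 conservatives giving
        the liberal response in wave 2)."""
        return pd.crosstab(panel[wave1_col], panel[wave2_col], normalize="index")

    # e.g. change_table(cces_panel, "rr_overcome_2010", "rr_overcome_2014")
    #      change_table(vsg_panel, "rr_overcome_2011", "rr_overcome_2016")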

demsrr061918_1.png
Figure 1

On the “overcome” RR item (left-hand side) in Figure 1, 12 percent of original racial conservatives change to the liberal position between 2010 and 2014. In terms of the survey responses, this represents a change from agreeing that blacks should overcome prejudice without special favors (a racially conservative stance) to disagreeing with this sentiment (a liberal stance). Importantly, this switch outweighs its mirror opposite, as only six percent of original racial liberals change to a conservative position (movement in the liberal direction does not get cancelled out by opposite movement). However, as Figure 2 below shows for the same item (left-hand side), a greater share of original conservatives (21 percent) switch to the liberal side on this RR item from 2011 to 2016–a time span that includes any influence from the 2016 election. Those originally indifferent (Neither/DK) also move in the liberal direction more during 2011-16 than during 2010-14.

demsrr061918_2
Figure 2

Similar results appear for the “slavery” RR item shown on the right-hand side of Figures 1 and 2. On this question, 20 percent of original racial conservatives–disagreeing that generations of slavery/discrimination have made it difficult for blacks to work their way up–became racially liberal (agreeing with the statement) four years later in 2014. Once again, while greater than the opposite movement (only seven percent of racial liberals became conservative), the trend toward more racially liberal attitudes is greater for the time span that includes 2016: 30 percent of original racial conservatives (in 2011) adopt racially liberal attitudes on this RR item five years later, a greater percentage than the one seen for the 2010-14 change.

In sum, the data here show that some individual level racial attitude change was already developing prior to the 2016 election. It’s also worth clarifying that, for all of these comparisons, the 2011-16 span essentially covers the 2010-14 span, and thus the 2011-16 window picks up most of any pre-2016 election racial attitude change. Given these time frames and the fact that these are different surveys, it remains difficult to discern exactly when the change occurred most. Nevertheless, a time frame that includes 2016–and thus likely any influence from the last election–clearly contributes at least some amount to the racial attitude change seen within the last decade.


The Effects of Survey Topic Salience on Response Rate and Opinions: Evidence from a Student Survey Experiment

As part of a recent survey of Dartmouth students, I implemented a survey topic experiment to determine how revealing the topic of the survey when soliciting responses affects 1) the response rate and 2) the responses themselves. For background, in order to gather responses for these student surveys, I send out email invitations with a survey link to the entire student body. Partly inspired by past research demonstrating that interest in a survey’s topic increases participation, I created two conditions that varied whether the topic of the survey was made salient in the email message (i.e., in the email header and body) or not. This resulted in what I call a “topic” email sendout and a “generic” email sendout, respectively, to which 4,441 student email addresses were randomly assigned (N = 2,221 for generic, N = 2,220 for topic).
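
The assignment itself amounts to a random split of the address list; a minimal sketch under that assumption (the names and seed here are hypothetical):

    import random

    def assign_conditions(emails, seed=2018):
        """Randomly split the email list into the two invitation conditions."""
        rng = random.Random(seed)
        shuffled = list(emails)
        rng.shuffle(shuffled)
        half = len(shuffled) // 2            # 4,441 addresses -> 2,220 and 2,221
        return {"topic": shuffled[:half], "generic": shuffled[half:]}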

The below table shows the contents for each experimental condition:

treatments

Because the survey I was fielding focused on politics and social attitudes on campus, the topic treatment email–on the right-hand side–explicitly revealed that the survey was about politics (both in the header and body). The generic treatment on the left simply described the survey as one from “The Dartmouth” (the student newspaper for which the survey was being fielded) and implied that general questions would be asked of students. Much like in other related research, this made for a fairly subtle but realistic manipulation in the introduction of the survey to the student population.

Given this subtle difference, it might come as no surprise that only small differences resulted for the outcomes of interest (response rate and opinions on specific survey questions). However, both surprising and expected effects did arise, suggesting that revealing a survey topic–in this case, its political nature–does make for a slightly different set of results and could lead to some nonresponse bias. These results are of course specific to the Dartmouth student body, but they may have some bearing on surveys of younger populations more broadly.

Students received two rounds of survey invitation emails–first on a Monday night, then another on the following Thursday night. After one email sendout, as the below Table 1 shows, students in the topic email condition (7.2 percent response rate) were significantly (p=0.04) less likely to respond to the survey–by 1.7 percentage points–than students in the generic email condition (8.9 percent response rate).

experiment results 1
Table 1

Knowing a survey is about politics made students less likely to take it. Speculatively, perhaps this politics survey request–which entails discussing politics and expressing oneself politically–acts as a deterrent in light of how often controversy and rancor become associated with both campus and national political scenes. In other words, politics could be a “turn-off” for students deciding whether to take a survey. However, after receiving one more email request to take the survey, students in both conditions start responding more similarly (note: those who had already taken the survey could not take it again). Although the topic email treatment still leads to a lower response rate, the size of the difference shrinks (from 1.7 to 0.9 points) and its statistical significance goes away (p=0.34).
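
For reference, here is a sketch of the kind of two-proportion test that underlies these p-values, using statsmodels; the counts in the example are rough reconstructions from the reported rates, not the actual experimental data.

    from statsmodels.stats.proportion import proportions_ztest

    def compare_response_rates(resp_generic, n_generic, resp_topic, n_topic):
        """Difference in response rates (generic minus topic) and its z-test p-value."""
        _, pval = proportions_ztest(count=[resp_generic, resp_topic],
                                    nobs=[n_generic, n_topic])
        diff = resp_generic / n_generic - resp_topic / n_topic
        return diff, pval

    # e.g., after the first sendout (approximate counts):
    # compare_response_rates(int(0.089 * 2221), 2221, int(0.072 * 2220), 2220)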

The bottom half of Table 1 also shows how the distributions of key demographic and political characteristics differ by condition. Women are five points less likely to take a survey they know is about politics than a perceived generic survey–perhaps in line with a view of politics as conflictual and thus aversive, as I’ve discussed before–but this difference does not reach statistical significance. Little difference emerges by race.

Most interestingly, the survey topic treatment produces different pictures of the partisanship distribution. When survey responses are solicited using a generic email invitation, Democrats make up 71.9 percent of respondents; that share drops by more than seven points under the politics topic treatment, to 64.3 percent (a difference that reaches marginal statistical significance). On the other hand, Republican students appear to select into taking a politics survey at a higher rate: the generic email condition yields 14.5 percent Republicans while the topic email condition yields 23.1 percent (a difference significant at p=0.01). Republicans thus are more inclined to take surveys when they know the survey is about politics, while Democrats become less inclined to do so. At least in the Republican case–the stronger result–one reason may be that a political survey affords them an opportunity for political expression in a campus environment where they are typically outnumbered 3 to 1 by Democrats and therefore might be less open about their politics. Whatever the mechanism, this result is not totally unexpected: the two highest Republican percentages I’ve found in surveys of Dartmouth students have come from surveys whose email invitations revealed the survey as a political one.

A few notable differences by experimental condition materialized for substantive survey items as well. A battery of questions (shown in the upper fourth of Table 2) probed how knowing that another student had opposing political views affected a range of social relations. No consistent differences (and none reaching statistical significance) resulted for these questions.

On the question of whether a student had ever lost a friend at the school because of political disagreements, however, more students indicated this was the case in the topic email condition: 17.2 percent did so, compared to 10.9 percent in the generic email treatment (a difference significant at p=0.04). Raising the salience of the survey topic (politics) to potential respondents thus leads to higher reports of politics factoring into students’ lives in a substantial way such as this.

experiment results 2
Table 2

This latter finding is not the only piece of evidence suggesting that the politics email treatment more strongly attracts students for whom politics plays a big role in their lives. Many fewer students report politics rarely/never being brought up in classes in the topic email condition (13.9 percent) than in the generic email condition (24.1 percent), a statistically significant difference. This smaller role of politics in personal life among generic email respondents is also evident in questions about how often politics comes up when talking with friends and in campus clubs/organizations.

Lastly, a question asked whether the political identification of a professor would affect a student’s likelihood of taking the professor’s class. Greater indifference to professor ideology emerged in the generic email condition, specifically for the two non-mainstream ideologies (libertarianism and socialism); students who took the survey in the topic email condition indicated that non-mainstream professor ideology influenced their course selection to a greater extent.

In sum, many of the data points in Table 2 suggest that a survey email invitation raising the salience of the survey topic (i.e., politics) results in a sample for whom politics assumes a greater role in personal life. This intuitive and expected nonresponse bias–although secondary to the more important response rate and partisanship distribution findings–is still worth noting and documenting statistically.


Over-Time State and National Bias in CCES Vote Validated Turnout Rates

Inspired in part by a Grimmer et al. (2018) research note that touches on CCES turnout data usage, I calculated turnout bias at the state level in every election that the CCES covers (2006 to 2016). I measure bias as the difference between vote-validated state-level turnout from the CCES (survey turnout) and voter-eligible population (VEP) highest-office turnout taken from the United States Elections Project (actual turnout). Positive values indicate survey overestimates of turnout, while negative values indicate underestimates. I break up the state-level measures of bias by region to make the visualization clearer, and national-level bias appears at the end.
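
A sketch of the bias calculation under simple assumptions: two hypothetical tables, each with state, year, and turnout columns, one holding the CCES validated-vote estimates and the other the VEP highest-office rates.

    import pandas as pd

    def turnout_bias(cces: pd.DataFrame, vep: pd.DataFrame) -> pd.DataFrame:
        """Survey turnout minus actual turnout, by state and election year.
        Positive values = CCES overestimates; negative = underestimates."""
        merged = cces.merge(vep, on=["state", "year"], suffixes=("_cces", "_vep"))
        merged["bias"] = merged["turnout_cces"] - merged["turnout_vep"]
        return merged[["state", "year", "bias"]]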

Results here generally shed light on the reliability of state-level turnout measures generated from the CCES, especially in the context of the research that Grimmer et al. discuss (over-time turnout comparisons across states). The data also reflect the quality of state voter files and the ability of the CCES to match its respondents to each state’s file. Aside from a few cases, particular states are not consistently more or less biased than others over the last six elections. Bias within a given state is also fairly volatile, changing a good amount from election year to election year. At first glance, there does not appear to be a clear pattern to cross-state and over-time turnout bias in the CCES.

ccesturnoutbias_Midwest_050518

ccesturnoutbias_West_050518.png

ccesturnoutbias_Northeast_050518

ccesturnoutbias_South_050518

ccesturnoutbias_natl_050618


2016 Elite Cues and Public Misperception about Crime

The overestimation of crime rates is one of the most enduring and prevalent misperceptions in the U.S. Despite evidence clearly pointing to declines in various measures of crime in essentially every year over the last few decades, majorities of Americans during this same span have consistently said crime has increased over the previous year. Just as curious is how small a role partisanship and partisan motivated reasoning play in this misperception. For the most part, similar numbers of Democrats and Republicans have held this misperception of a rising crime rate over time, in contrast to perceptions of other national conditions like the economy.

The 2016 election introduced an important wrinkle into this topic, as Donald Trump all but endorsed this misperception and campaigned on concerns over rising crime. His rhetoric on crime arguably represented the strongest elite message communicated to the public concerning this misperception. A shift like this–where one party/elite takes up one side of an issue while the other side (Democrats/Clinton) does not echo the message–seems ripe to produce the “polarization effect” that John Zaller describes in Chapter 6 of his book on elite cues and public opinion, The Nature and Origins of Mass Opinion. Specifically, with the introduction of two-sided elite messages on this issue, we should expect partisan opinion to become more polarized, especially among the most politically aware (i.e., those most likely to receive elite messages, such as through the media). Data from Pew in the weeks leading up to the 2016 election–shown in the plot below–offer strong support for this pattern.

crimemisperceptionelitecues050118

Those most likely to receive elite cues–as measured by how closely they say they’ve been following news about the candidates in the election (shown on the x-axis)–do indeed prove most polarized in their perceptions of crime. The most attentive Democrats endorse the misperception the least, while the most attentive Republicans endorse it the most. It’s worth noting that this dynamic doesn’t always appear around election time–similar questions about crime perceptions did not show the same pattern by party and news attentiveness in the 2000 and 2008 elections:

crimemisperceptionelitecues00_050118

crimemisperceptionelitecues08_050118

It’s thus clear that the Zaller-type polarization effect is specific to 2016. Furthermore, the 2016 case shows that elite cues are potent enough to polarize perceptions of factual conditions–not just opinion on the policy issues that Zaller focuses on in his work. Related to the original question of interest, evidence like this shows that the enduring crime misperception can fall subject to elite and partisan manipulation. This of course emerged in the thick of the 2016 election and an especially strong elite message environment. Later data from Gallup in 2017–though worded differently and coming from a different organization–show that partisan homogeneity in the crime misperception has returned, reflecting the same level of belief in rising crime recorded in years before the 2016 election. Perhaps elite rhetoric on crime subsiding after the election weakened the initial polarization effect seen during the campaign, with crime misperceptions proving resilient and entrenched after a temporary interruption by elite messages.

Update 5/29/18:

Below is more comprehensive data on this topic and some context for the unexpectedly small role of partisanship in crime perceptions. The first graph shows crime perceptions by party, while the second shows economic perceptions by party. 2016 did indeed usher in larger partisan fissures in crime (mis)perception, but they quickly shrank the following year.

crimepercbyparty2_051218

In addition, compared to perceptions of other national conditions–namely, the economy–crime perceptions are much less divided by partisanship and by which party holds the presidency (the latter serving as an indication of partisan motivated reasoning).

econpercbyparty_052518

Across all years, 38 percent of Democrats on average say economic conditions are getting worse when their party holds the presidency, while 66 percent say so when out of power (a 28 point difference). For Republicans, on average, it’s 34 percent when in power and 61 percent when out of power (a 27 point difference). For crime perceptions, the differences prove much more muted, especially for Democrats. Across all years that a Republican president is in power, an average of 55 percent of Republicans say crime is rising; when out of power, Republicans say this 20 percentage points more often (75 percent). On the other hand, 64 percent of Democrats say crime is rising compared to the previous year when they are out of power, while 62 percent do so when they are in power. Putting aside the interesting partisan asymmetry here, the magnitude of the differences is particularly notable: on average across all years, partisans are separated by 29 percentage points in economic perceptions, but for crime, the average absolute difference is 13 points.
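
A sketch of how these in-power vs. out-of-power averages can be computed, assuming a hypothetical tidy table with one row per respondent party and year, the party holding the presidency that year, and the percentage giving the negative perception (crime rising or economy getting worse):

    import pandas as pd

    def in_vs_out_of_power(perceptions: pd.DataFrame) -> pd.DataFrame:
        """Average negative perception by respondent party and whether that
        party holds the presidency in a given year."""
        df = perceptions.copy()
        df["in_power"] = df["party"] == df["president_party"]
        return (df.groupby(["party", "in_power"])["pct_negative"]
                  .mean()
                  .unstack("in_power"))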

 


Republican Trump Support and Survey Panel Attrition

In my last post, I used panel data to conclude that whether you evaluate Donald Trump’s intra-party approval among 2011 Republicans or among 2016/17 Republicans, it remains about the same–roughly four out of five Republicans approve of their president. As a result, I ruled out any serious concern about the endogeneity of partisanship to approval, which would emerge if original partisans disapproved of Trump at much higher rates. A caveat I mentioned is one common to all panel survey analysis: the possibility of nonrandom attrition from the panel. This could take the form of Republicans who dislike Trump dropping out of the panel (i.e., participating in earlier waves but not responding to later waves) at a higher rate than their fellow panelists. That dynamic is certainly plausible. Taking a survey is a political act and means taking time to express one’s opinions on current politics. Republicans who dislike Trump likely feel some discomfort with current politics and some dissonance–disliking a president from their own party–and thus might avoid expressing themselves politically (i.e., taking a political survey). That distaste for politics may have heightened over the course of 2017 and the several controversies surrounding Trump throughout the year, perhaps after these Republicans were initially comfortable discussing their politics (taking a survey) back in 2016.

The same Voter Study Group panel data I used before provides an opportunity to test this idea. While the December 2011 and 2016 waves of the panel each include the same 8,000 Americans, the July 2017 wave contains only 5,000 of those 8,000 respondents; 3,000 people thus dropped out of the survey. If Republicans who disliked Trump were more likely to leave the panel between 2016 and 2017 than Republicans who liked him, then 2017 intra-party Trump approval might be artificially high. More broadly, evidence of this dynamic could offer insight into another aspect of panel survey attrition–perhaps those experiencing cognitive dissonance about contemporary politics are especially prone to dropping out.

Focusing on Republicans in the 2016 wave (N = 3,144), I use OLS and logistic regression models to predict a binary outcome: whether an individual took the 2017 survey (a value of 0) or “dropped out” between 2016 and 2017 (a value of 1). (Note that these respondents could still “return” and take future surveys, as they did not actually leave the panel itself; I use “dropout” as shorthand here.) Thus, I am predicting panel dropout (relative to continued participation) as the dependent variable. My predictor of interest is “Trump dislike,” captured by a four-point Trump favorability rating (reverse coded so higher values correspond to a more unfavorable opinion) asked in the December 2016 wave of the panel. Motivated by past work on survey panel attrition, I include several control variables in the modeling: gender, education (high school or less, some college, B.A. plus), race (white, black, Hispanic, other), age, a four-point political interest scale (higher is more interest), and a partisanship stability variable. For the latter, I use each individual’s reported seven-point partisanship from 2011 and 2016 and take the absolute value of the difference. Here’s the formula:

  • stability = |partyID2011 – partyID2016|

This variable ranges from 0 (perfectly stable party identification from 2011 to 2016) to 6 (switching from Strong Democrat to Strong Republican or vice versa), with a mean of 0.61. Those with the most stable partisan identities should be expected to stay in the panel at the highest rates; if this happens to correlate with Trump dislike, it becomes all the more important to include in the modeling.
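
A sketch of how the stability variable and the two models might be set up, with hypothetical column names standing in for the actual VSG variables (both tabled models are treated as linear probability models here, since the logistic version appears only later for the predicted probabilities):

    import statsmodels.formula.api as smf

    def fit_dropout_models(reps):
        """Model 1: dropout on Trump dislike alone; Model 2: adds the controls.
        `reps` is assumed to hold the 2016-wave Republicans with these columns."""
        reps = reps.copy()
        # stability = |partyID2011 - partyID2016|, as defined above
        reps["stability"] = (reps["pid7_2011"] - reps["pid7_2016"]).abs()
        model1 = smf.ols("dropped_out ~ trump_dislike", data=reps).fit()
        model2 = smf.ols("dropped_out ~ trump_dislike + stability + age"
                         " + political_interest + C(gender) + C(education) + C(race)",
                         data=reps).fit()
        return model1, model2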

Below I regress the panel dropout indicator on Trump dislike (Model 1) and then add in the set of control variables (Model 2). Positive coefficients indicate a greater likelihood of dropping out between the 2016 wave and the 2017 wave.

paneldropout.PNG

The statistically significant and positive effect of the Trump dislike variable offers evidence in favor of the nonrandom attrition story motivating this analysis: Republicans less favorable toward Trump were more likely to drop out of the panel. The effect is not overwhelmingly large, and it is cut in half after controlling for other variables, but it remains. This matters because it means the 2017 wave excludes some Republicans who dislike Trump; if they had remained in the panel, Trump’s approval among Republicans would likely be lower in 2017 than what’s actually observed. Attrition thus makes Trump’s Republican approval appear stronger than is really the case (though again, the relationship is not so strong as to substantially inflate his approval).

To better visualize this main result, I plot the predicted probability that a Republican VSG panelist drops out of the survey between 2016 and 2017 as a function of Trump dislike (now using logistic regression instead of an LPM). Control variables are all held at their means or modes.
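
A sketch of that predicted-probability step, refitting the specification as a logit and holding the controls at fixed values; the column names carry over from the previous (hypothetical) snippet:

    import pandas as pd
    import statsmodels.formula.api as smf

    def predicted_dropout_probs(reps):
        """Predicted dropout probability at each Trump-dislike value (1 = very
        favorable ... 4 = very unfavorable), controls held at means/modes.
        Assumes the `stability` column from the previous sketch has been added."""
        logit = smf.logit("dropped_out ~ trump_dislike + stability + age"
                          " + political_interest + C(gender) + C(education) + C(race)",
                          data=reps).fit()
        grid = pd.DataFrame({"trump_dislike": [1, 2, 3, 4]})
        for col in ["stability", "age", "political_interest"]:
            grid[col] = reps[col].mean()          # continuous controls at their means
        for col in ["gender", "education", "race"]:
            grid[col] = reps[col].mode().iloc[0]  # categorical controls at their modes
        grid["p_dropout"] = logit.predict(grid)
        return grid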

reptrumpdropout042218

The probability that a Republican with a very favorable view of Trump drops out of the survey is 0.37, while the probability that a Republican with a very unfavorable view of him drops out is 0.45. Again, while this is not a substantial increase going from most to least favorable toward Trump, the tendency remains clear–Republicans more unfavorable toward Trump dropped out of the panel at a higher rate. Perhaps this results from Republicans who dislike Trump feeling uncomfortable with and avoiding political self-expression, such as taking a political survey. If that’s the case, it could also be affecting Trump approval surveys more broadly, though that’s more speculative. At the very least, the implications for my earlier analysis are clear: Trump’s approval numbers among Republicans are generally reliable (not suffering from a serious endogenous partisanship problem), but they may still be slightly inflated by this panel attrition problem, in which Republicans who dislike Trump have become somewhat less willing to take surveys during his presidency.


Does Endogenous Partisanship Distort Trump Approval Numbers among Republicans?

Donald Trump often receives high approval marks from members of his own party, a sign many interpret as a forceful demonstration of strong party loyalty in the current age. Moreover, many view this strong base support as a constraint on other Republican elites: despite a tumultuous presidency, elected Republican officials must heed the opinion of their rank-and-file and cannot abandon Trump. However, many have also raised questions about the reliability of these intra-party approval numbers. Specifically, a key question–one I have tried to speak to before–is whether partisanship is endogenous to Trump approval. If original Republicans who approve of Trump continue to identify as Republican while original Republicans who disapprove start to eschew the label, this creates a misleading portrait of base support. In such a case, Republican identification becomes inseparable from support for Trump, and party breakdowns of Trump approval lose meaning.

Recent panel survey data from the Democracy Fund’s Voter Study Group can shed light on this important question. I’ve used this data often, but crucially, the recent release includes a wave from July 2017. It retains its panel structure, providing measures of partisanship in December 2011, December 2016, and July 2017 for the same 5,000 individuals. Below, I break down Trump approval (approve and disapprove percentages), as asked in the July 2017 wave, by these three measures of partisanship. This allows for an intra-party measure of Trump support that few if any current polls can manage–specifically, answering the question of how different Republican Trump approval (during his presidency) would look among individuals who originally identified as Republican (i.e., using an earlier measure of partisanship, such as from 2011).
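
A sketch of the breakdown itself, assuming a hypothetical VSG frame with a 2017 approval indicator and a three-category party ID measured in each wave:

    import pandas as pd

    def approval_by_party_measure(vsg: pd.DataFrame) -> pd.Series:
        """Share approving of Trump in 2017 among Republicans, where 'Republican'
        is defined by each wave's party ID measure in turn."""
        shares = {}
        for wave in ["pid3_2011", "pid3_2016", "pid3_2017"]:
            reps = vsg[vsg[wave] == "Republican"]
            shares[wave] = reps["approve_trump_2017"].mean()
        return pd.Series(shares)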

trumpapprendog041718

The key comparison is between 2017 Trump approval among “original Republicans” (individuals who identified as Republicans in 2011) and 2017 Trump approval among “current Republicans” (those identifying with the party in 2017). The more favorable current Republicans’ approval is relative to original Republicans’ approval, the more evidence accrues in favor of the theory that endogenous partisanship inflates Trump’s base support. If approval among the two groups is similar, then endogenous partisanship likely does not present much of a concern. Approval is indeed lower when using the original party ID measurement (79 percent) than when using contemporaneous partisanship to break down Trump approval (83 percent), and these percentages are statistically significantly different at p < .01.

However, in a substantive sense, these approval levels are very similar. About four in five Republicans approve of Trump regardless of whether current or earlier measures of partisanship are used. This suggests that partisanship is, for the most part, not that endogenous to Trump approval, and that the high base support for Trump observed in current approval polls is not inflated (e.g., HuffPost Pollster data currently pegs Republican approval of Trump at 83 percent approve/15 percent disapprove). Caveats are always in order–this is just one survey, the most recent Trump approval measure is from July 2017 and things may have changed since then, and survey attrition could interfere with the results–but political observers should largely view Trump base support numbers as meaningful.
