Is the Relationship between Police Attitudes and Hispanics Shifting to Trump in 2020 Distinctively Strong?

In the aftermath of the 2020 election, many have highlighted the Republican Party’s gains among Hispanic voters and sought to answer the question of what caused them. One influential account has centered on conservative ideology and police-related opinions, as summarized in the following tweet and article screenshot:

This account holds that the salience of these issue considerations was heightened by the Black Lives Matter protests in the summer preceding the general election, by movements like "defund the police" that grew out of them, and by the broader public discourse on the issue. Hispanics, perhaps, were most sensitive to these considerations, and those with conservative views, both generally and on police-related issues specifically, became especially likely to switch to Trump after not having voted for him four years earlier.

Research Questions and Data/Methods

With the recent release of major post-2020 election academic surveys, I wanted to use one, the Cooperative Election Study (CES, formerly the CCES), to both replicate and extend tests of this theory of Hispanic vote switching. Replication is valuable as a way to see how robust and generalizable a finding really is; the more the same pattern shows up across different datasets, the more confident we should be in its veracity. I also wanted to check two other parts of this broader argument:

  1. Was the relationship between police attitudes and vote switching unique to Hispanics? This wasn’t explicitly claimed, but it’s an important part of the story. If voters of all races are switching sides based on police-related attitudes, then it’s not clear why this account should revolve around Hispanic voters in particular.
  2. Were police attitudes more strongly correlated with vote switching than other, unrelated attitudes? This was explicitly claimed, so it's worth comparing the predictive strength of various issue areas for vote switching (among Hispanics and other racial groups).

I test this theory by running OLS regression models (with robust standard errors) of 2016-20 Trump vote switching. I run separate regressions (1) for each of four major racial groups (Whites, Blacks, Hispanics, and Asians) and (2) for each attitudinal predictor of interest (police-related and non-police-related alike). All models include a standard set of controls. All right-hand-side variables come from the pre-election survey wave and are recoded to 0-1 scales, where higher values indicate more conservative attitudes. I also subset to individuals who didn't vote Republican in 2016, i.e., those eligible to switch votes at all (past academic work on vote switching does the same). Here are more details on how I make use of the CES data (a minimal code sketch of this setup follows the list):

  • Outcome
    • Vote switching, coded 1 if a 2016 non-Trump voter voted for Trump in 2020 and 0 if they did not
    • 2016 vote choice is recalled, asked in the pre-election wave
    • 2020 vote choice is contemporaneous, asked in the post-election wave
  • Predictors
    • Key attitudes:
      • Self-described ideology
        • Five-point scale
        • Not tied to police views, but part of broader argument (see above)
      • Pro-police attitude index
        • Average of 7 CES items that tap into police-related attitudes
        • Drawn from the CC20_334* battery of variables (excluding the first item, which isn't police-related)
        • These items ask about requiring body cameras, increasing/decreasing the number of police, banning chokeholds, creating a national registry of police who've been investigated, ending the DoD program that sends surplus weapons to police departments, and allowing families to sue officers for damages (see the codebook for specific wordings)
      • Police safety
        • Question on whether the police make you feel safe
        • Four-point scale, mostly safe to mostly unsafe
      • Opposition to decreased police funding
        • This is a single item from the 7 mentioned above that most closely gets at opinions on “defunding the police”
        • Respondents are asked if they support or oppose the following: “Decrease the number of police on the street by 10 percent, and increase funding for other public services”
    • General (unrelated) attitudes:
      • Always allow women to obtain an abortion as a matter of choice
      • Ban assault rifles
      • Support increasing spending on border security by $25 billion, including building a wall between the U.S. and Mexico
      • Support tariffs on $200 billion worth of goods imported from China
      • Give the Environmental Protection Agency power to regulate carbon dioxide emissions
      • Expand Medicare to a single comprehensive public health coverage program covering all Americans
  • Controls
    • Female
    • College education
    • Age group (4-category)
    • Type of area lived in (city, suburb, town, rural area, other)
    • Self-described “born-again” or evangelical Christian
    • Degree of interest in government and public affairs
    • Party ID (7-point)
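To make this setup concrete, below is a minimal sketch of how the variable construction and models could be implemented in Python with pandas and statsmodels. All column names (e.g. trump_2016, pro_police_index, the control names) and the file name are hypothetical placeholders for illustration, not the actual CES variable names.

```python
# Minimal sketch of the modeling setup (pandas + statsmodels).
# All column names and the file name are hypothetical placeholders,
# not the exact CES variable names.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ces_2020.csv")

# Subset to 2016 non-Trump voters (those eligible to switch) and code the outcome.
df = df[df["trump_2016"] == 0].copy()
df["switched_to_trump"] = df["trump_2020"].astype(int)

# Pro-police attitude index: average of the seven police-related items,
# each assumed to be already recoded 0-1 with higher values = more conservative.
police_items = ["police_item_1", "police_item_2", "police_item_3", "police_item_4",
                "police_item_5", "police_item_6", "police_item_7"]
df["pro_police_index"] = df[police_items].mean(axis=1)

controls = ("female + college + C(age_group) + C(urbanicity) + "
            "born_again + pol_interest + pid7")

predictors = ["ideology", "pro_police_index", "police_safety", "oppose_defund",
              "abortion_choice", "assault_rifle_ban", "border_wall_spending",
              "china_tariffs", "epa_co2", "medicare_for_all"]

# One OLS model per racial group x predictor, with robust (HC2) standard errors.
results = {}
for race in ["White", "Black", "Hispanic", "Asian"]:
    sub = df[df["race"] == race]
    for predictor in predictors:
        fit = smf.ols(f"switched_to_trump ~ {predictor} + {controls}",
                      data=sub).fit(cov_type="HC2")
        results[(race, predictor)] = fit
```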

Results

The two plots below contain all the relevant results: the first for police-related attitudes (and overall ideology, which is part of the original argument) and the second for general, unrelated attitudes. Each contains results for the different attitudinal predictors and for each racial group. Regression coefficients (with 95% confidence intervals) from each model are shown. Because each predictor is scaled 0-1, a coefficient can be interpreted as the change in the probability of switching to a Trump vote in 2020, after not having voted for him in 2016, associated with moving from the least to the most conservative position on that attitude.
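For completeness, here is a rough sketch of how one such coefficient plot could be assembled with matplotlib from the results dictionary in the hypothetical modeling sketch above; the predictor and group labels are the same placeholders as before.

```python
# Sketch of a coefficient plot: point estimate with 95% confidence interval
# for one predictor, one point per racial group. Reuses the hypothetical
# `results` dictionary from the earlier modeling sketch.
import matplotlib.pyplot as plt

predictor = "pro_police_index"
races = ["White", "Black", "Hispanic", "Asian"]

fig, ax = plt.subplots()
for i, race in enumerate(races):
    fit = results[(race, predictor)]
    est = fit.params[predictor]
    lo, hi = fit.conf_int().loc[predictor]  # 95% CI by default
    ax.errorbar(est, i, xerr=[[est - lo], [hi - est]], fmt="o", capsize=3)

ax.axvline(0, linestyle="--", linewidth=1)
ax.set_yticks(range(len(races)))
ax.set_yticklabels(races)
ax.set_xlabel("Change in probability of switching to Trump (full 0-1 scale shift)")
ax.set_title(predictor)
plt.show()
```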

The first graph shows fairly similar trends across each key predictor. Each attitude is often correlated with vote switching at a statistically significant level (the confidence interval doesn't cross zero), though the strength of this relationship varies by racial group. For Blacks, it's fairly weak, but for Whites, Hispanics, and Asians, it's roughly equally strong. If anything, these associations tend to be stronger among Whites, though I'm not directly testing this (an interaction model, like the one sketched below, would be needed). Even eyeballing the estimates, though, makes clear that the association between ideology/police-related attitudes and vote switching is not distinctively strong among Hispanic individuals. For example, going from support for to opposition to decreasing the number of police increases the probability of switching to Trump among Hispanics by 0.09 points, but that increase is basically the same among Whites (0.10) and Asians (0.08).
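For reference, the kind of interaction model mentioned above, which would directly test whether a predictor's association with switching differs across racial groups, could look something like the following sketch. It reuses the hypothetical df, controls, and variable names from the earlier code.

```python
# Sketch of a pooled interaction model testing whether the oppose_defund slope
# differs by racial group (White as the reference category). Reuses the
# hypothetical df and controls from the earlier modeling sketch.
pooled = df[df["race"].isin(["White", "Black", "Hispanic", "Asian"])]
inter = smf.ols(
    "switched_to_trump ~ oppose_defund * C(race, Treatment(reference='White')) + " + controls,
    data=pooled,
).fit(cov_type="HC2")

# The interaction coefficients (e.g. the oppose_defund x Hispanic term) indicate
# whether the Hispanic slope differs significantly from the White slope.
print(inter.summary())
```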

The graph below turns attention to other issue areas that are unrelated to police attitudes. The same type of pattern appears here too: the relationship between issue position and vote switching tends to be stronger among non-Black groups. The main takeaway, though, is that many other non-police-related considerations are also associated with switching to Trump among Hispanics (and other racial groups, for that matter). I would be cautious about directly comparing the size of the relationships across the two graphs, as such comparisons require direct statistical tests (e.g. tests of the equality of coefficients, like the one sketched below), among other things. The main purpose of these results is to show that police attitudes do not distinctly predict vote switching among Hispanics; support for border security spending, for example, appears to matter a great deal too, among other factors. Perhaps more importantly, the predictive strength of these issue considerations, whether related to police or not, is far from unique to Hispanic voters.
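As one illustration of the kind of direct test mentioned above, a simple approach is to include two attitudes in the same model and test the equality of their coefficients with a Wald/t test. This is a sketch under the same hypothetical variable names as the earlier code, not the exact test that would settle the comparison.

```python
# Sketch of a test of coefficient equality among Hispanic respondents:
# fit one model with both attitudes and test whether their coefficients differ.
# Reuses the hypothetical df and controls from the earlier modeling sketch.
hisp = df[df["race"] == "Hispanic"]
both = smf.ols(
    "switched_to_trump ~ oppose_defund + border_wall_spending + " + controls,
    data=hisp,
).fit(cov_type="HC2")
print(both.t_test("oppose_defund - border_wall_spending = 0"))
```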

To summarize…

  1. The relationship between police attitudes and 2020 Trump vote switching is not unique to Hispanics.
  2. Police attitudes, compared to non-police attitudes, are not consistently stronger predictors of vote switching among Hispanics (or any racial group).

Limitations

In closing, there are a few important limitations to this analysis to keep in mind:

1. The CES data likely is not perfectly representative of all Hispanic voters; it will be useful to replicate the analysis here on other datasets, especially ones from surveys better equipped to survey Hispanics in particular (e.g. the CMPS).

2. I had to rely on recalled vote choice for my outcome measure. This is not perfect, as people may misremember or intentionally misreport their vote choice in a presidential election held four years earlier. However, recalled vote has been used reliably in prior academic work that seeks to explain vote switching, and other data points suggest people's recall is highly accurate (I have research in the works that similarly supports the accuracy and reliability of vote recall measures).

3. The CES data does not yet include a measure of validated voting, i.e., whether individuals actually turned out to vote in the past election according to voter file records (see here for more). The analysis here will therefore inevitably include some individuals who did not actually vote in 2020 but nevertheless report a vote choice. It will be important to redo this analysis among only validated voters once that data becomes available. However, it's not clear that results would change much. While turnout misreporting matters greatly for explaining outcomes that center on turnout, it might be less problematic for outcomes that center on vote preference (turnout overreporting doesn't vary much by party, and in other work I've done validating survey-based vote preference data against real-world outcomes, accounting for validated turnout makes little difference).

4. In general, this is not a good way to understand how issue considerations relate to voting behavior. Using cross-sectional data, as I do here, will almost surely overestimate the importance of issues for vote choice, because voters tend to bring their issue opinions in line with the positions of the candidate they vote for. Vote choice in the 2020 election is technically measured after issue opinions are recorded, but most voters have already developed vote preferences by the time they take the pre-election survey (when the issue questions are asked), so issue positions were likely already updated to better accord with candidate support. Replicating this analysis with panel data (specifically, with issue positions asked in 2016) will be important. For now, keep this point in mind when interpreting the size of the regression coefficients presented above: the strength of these relationships is likely inflated, and in reality issue positions probably matter less for vote switching than they appear to here. For the analysis in this post, this opinion-updating dynamic would undermine my results if it varies substantially by a respondent's race (there is no strong reason to believe it does) or by type of issue (police-related issues likely received more attention in 2020 than many others, so if anything the importance of those variables for vote switching might be especially inflated).

5. The specifics of the outcome, predictors, and other aspects of my analysis here are very likely not identical to those used in other work on this topic. In this sense, what I'm doing is not an exact replication but more of a conceptual one, and differences in results do not necessarily mean the broader theory is right or wrong. At the same time, I'd still argue this is a valid test of the theory: if the theory doesn't find empirical support in other samples and with distinct but closely related measurement and analytical approaches, then we should question its strength and generalizability.


2020 Election Changes in Turnout along Vote Choice Distribution at County Level

More here.

Edit 11/17/20:

I caught a small error in the data going into the above graphs. The variable I thought was total votes cast in a county in 2020 did not actually equal the sum of all individual candidate votes (there's no codebook or similar documentation to check this against). Fortunately, the two total-vote variables were almost identical (correlated at > 0.999), so this didn't change the overall trend in the results (two outlier ratio values are no longer in the data, so the y-axis span changes). I reproduce the above graphs with the correct measure, along with updated voting data, below.
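For what it's worth, the consistency check described above amounts to something like the following sketch; the column and file names are hypothetical placeholders rather than the actual variables in the county returns data.

```python
# Sketch of the consistency check: compare the reported county vote total
# against the sum of the individual candidate vote columns.
# Column and file names are hypothetical placeholders.
import pandas as pd

county = pd.read_csv("county_returns_2020.csv")
county["votes_summed"] = county[["votes_dem", "votes_rep", "votes_other"]].sum(axis=1)

print(county["votes_total_reported"].corr(county["votes_summed"]))       # here, > 0.999
print((county["votes_total_reported"] != county["votes_summed"]).sum())  # counties that disagree
```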

Edit 11/19/20: Here's another graph looking at how turnout and vote choice correlate at the county level, this time measuring both in terms of changes from 2016 to 2020.
