Survey Nonresponse Bias, Declining Social Trust, and 2016 Polling Error

Response Rate Decline and Resulting Bias

The decline of response rates is one of the most serious methodological, and even existential, problems that survey research faces today. The response rate (the percentage of people who complete a survey out of all those sampled to take it) has been in steady decline over the last few decades. Pew reports a precipitous fall in its response rates, from 36 percent in 1997 to 9 percent in 2012.

This introduces the potential for nonresponse bias: the people who respond to surveys (and the answers they give) may be systematically different from the people who do not respond (and the answers that go unrecorded). While this bias cannot be calculated directly (by definition, there are no responses from the non-responding population), comparing low and high response rate surveys, along with their demographic and response compositions, moves us closer to an answer. The largest well-established bias is that people with disproportionately high levels of civic engagement select into taking surveys. In other words, on measures of volunteerism, community involvement, contacting public officials, and other political activity, surveys portray a much more engaged population than actually exists.

As response rates decline even further, the assumption needed for unbiased results (that the population that responds is indistinguishable from the one that doesn't) becomes all the more tenuous. More importantly, the bias could start to extend to characteristics beyond those related to civic engagement. In its most recent study of the issue, Pew did not find significant nonresponse bias on measures of political orientation like partisanship. But with nonresponse bias widely considered one possible source of state polling error in 2016, it's not far-fetched to believe that this previous lack of evidence for bias along political beliefs could be changing, or at least to consider that possibility.

The Role of Social Trust in Survey-Taking

One factor that could help explain this declining willingness to take surveys and the accompanying potential for unreliable survey results involves changes in social trust. Clare Malone at FiveThirtyEight highlighted the overall declining trust in American social institutions as recorded by Gallup. Since the early- to mid-2000s, Americans have reported lower levels of confidence in media, banks, Congress, and other democratic institutions. These trends are particularly significant considering that the winning campaign of the 2016 presidential election played to and capitalized on much of this distrust and disillusionment with the social system.

Given that response rates began sinking before trust in institutions truly started to erode, the relationship, at least on a qualitative level, is not overwhelmingly strong. However, there could still be a mechanism at play here, and one that has possibly grown over time: the effect of social trust (and social capital more broadly) on survey response rates. Some existing evidence that speaks to this question comes from the Democratic firm Civis Analytics, which had been sounding the alarm about declining survey response rates well before last year's Election Day. After the election, Civis drew a clear connection between the "Bowling Alone" voter (a term based on Robert Putnam's book on declining social capital in the U.S.) and pre-election polling error.

While uncertain whether nonresponse bias or coverage bias (the inability of surveys to reach certain groups) was more at work, Matt Lackey of Civis stressed that polls were failing, in one way or another, to capture the opinions of a certain segment of voters. This segment is one Putnam describes as composed of more blue-collar whites, who might have faced economic difficulties (related to industrial changes and globalization) and been uprooted from their homes. Most importantly, this group showed steadily declining levels of social trust and community ties, all traits that fall under the umbrella of social capital. Certain qualities would suggest this group was more disposed to voting for Donald Trump. Given their absence from polling, this would obviously introduce error into pre-election polls. As David Martin, also of Civis, points out, willingness to take surveys, in a context where compensation is often not provided, depends largely on an individual's sense of "civic duty" and "social obligation." When that sense is missing or in short supply, a clear mechanism for survey refusal begins to form.

Martin and a co-author, Benjamin Newman, also examined the association between social capital and an activity closely related to survey-taking: Census participation. The two found that Census response rates were strong positive predictors of different measures of social capital, such as trust in and interaction with one's neighbors. The supposed "cause-and-effect" is the inverse of what I suggested before, but the strength of the relationship is what matters here, and it lends support to this conceptualization of what drives survey-taking. (Moreover, if any causal link existed, it would not be greater Census response rates causing social trust to increase, but rather the other way around.) Older studies on this issue also frame survey participation, in censuses or otherwise, as a type of "community involvement" and "civic obligation." Changes in this type of social capital would thus clearly have implications for survey response rates, especially in the context of the noted decline in those rates.

Social Trust Levels Over Time

It's difficult to test this idea directly; instead, I turned to the General Social Survey (GSS) to at the very least document this development. Specifically, I wanted to check responses to the question about whether most people can be trusted, which the GSS has asked in several years from 1972 to 2014. One key caveat should be kept in mind. Using survey data to inform this debate presents a problem: the data gleans information from people who still respond to surveys, missing the main object of interest here, people who refuse to take surveys and their levels of social trust. However, I consider the data still informative of the overall trend, and if anything a conservative estimate. If the trust levels of those who refuse to take surveys could somehow be included in this analysis, the pattern of declining social trust, as well as the overall level of distrust, would only be greater. With that in mind, here are the rates at which all GSS respondents say most people can be trusted (in green) and cannot be trusted (in orange) over time (a third response option, "Depends," stays small and roughly constant and is not shown):

[Figure: Social trust vs. distrust among all GSS respondents, 1972–2014]

From 1972 to about 1990, trust declines and distrust increases a bit, but overall, responses remain fairly stable. Over the last couple of decades, however, social trust drops dramatically. From 1972 to 2014, the percentage saying they cannot trust most other people increases from 50.0 to 64.7 (+14.7 percentage points), and the percentage saying they can trust others falls from 46.3 to 30.3 (-16.0 points). This is consistent with previous claims of a serious decline in this key aspect of social capital.
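
For anyone who wants to reproduce this kind of tabulation, here is a minimal sketch in pandas. The file name and column codings ("gss_trust.csv", "year", "trust") are hypothetical stand-ins for however your GSS extract is labeled, and a careful replication would also apply the GSS sampling weights, which I omit here.

```python
import pandas as pd

# Hypothetical GSS extract: one row per respondent, with the survey year and
# the trust response coded "Can trust" / "Cannot trust" / "Depends".
gss = pd.read_csv("gss_trust.csv")

# Share of each response option among all respondents, by survey year.
rates = (gss.groupby("year")["trust"]
            .value_counts(normalize=True)
            .unstack())

# Change from the first to the last year the question was asked.
print((rates.loc[2014] - rates.loc[1972]).round(3))
```

The partisan and ideological breakdowns in the next sections follow the same pattern, just with a second grouping key (e.g. `gss.groupby(["year", "partyid"])`).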

Social Trust Decline By Political Orientation

Next, I break up those same results by party identification, plotting rates of social trust and distrust among the three main partisanship groups: Democrats (with Democratic leaners), Independents, and Republicans (with Republican leaners):

[Figure: Social trust and distrust by party identification, 1972–2014]

By far, the largest decline in social trust occurs among unaffiliated, self-identified Independents. The percentage of Independents saying they "cannot trust" most other people rises from 48.2 percent in 1972 to 70.2 percent in 2014, a 22-percentage-point increase. The rate of social trust in others sank as low as 18.3 percent among Independents in 2010, down from a high of 46.5 percent during this time frame.

Importantly, there are also signs of greater declines in trust among Republicans than among Democrats over these four decades. Although Republicans still express higher levels of trust at most points, they undergo greater change. While the percentage of Democrats saying they cannot trust most other people increases by 11.2 percentage points from 1972 to 2014, Republicans grow distrustful at a much faster rate, showing a 17.7-point jump in distrust. Similarly, while the share of Democrats saying they can trust most other people falls by 12.1 percentage points over this span, the share of Republicans saying the same falls by 19.3 points. Thus, social trust has declined at a higher rate among Republicans than among Democrats over time.

Assuming social trust affects survey response rates, this difference in trends could prove very consequential. Nonresponse matters less when it is evenly distributed across the values of a variable that might be tied to an outcome you're interested in, like vote choice. This assumption is especially important for something like political orientation (e.g. partisanship or ideology), which is more of a latent variable and thus a measure that researchers cannot reliably weight on to root out bias. When response rates (perhaps driven by trust) vary with the values of such a variable, a pattern that is hard to adjust for, biased survey results become all the more possible. Accordingly, Lackey observed the following dynamic during the 2016 campaign:

  • “What we found this year is there is a difference between those who took surveys and those who didn’t. People who took these surveys were more supportive of Hillary Clinton and Democrats.”

This sounds an awful lot like the important concept of partisan differential nonresponse, which was often emphasized during the 2016 election cycle and highlighted the different likelihoods of Democrats and Republicans responding to polls. For the most part, it's the same idea, but one distinction is worth noting. The type of differential nonresponse supposedly driven by decades-long shifts in social trust seems more persistent and longstanding, having less to do with fluctuations in a particular election season. I would consider this different from the kind of selection into and out of polls in response to campaign events that Doug Rivers and others describe.
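
To see how even a small gap in response rates translates into topline bias, consider a toy calculation (all numbers invented for illustration):

```python
# A 50/50 electorate in which Democrats are slightly more likely to respond
# to polls than Republicans (hypothetical rates, for illustration only).
dem_share, rep_share = 0.50, 0.50
dem_response_rate, rep_response_rate = 0.10, 0.08

dem_respondents = dem_share * dem_response_rate
rep_respondents = rep_share * rep_response_rate
sample_dem_share = dem_respondents / (dem_respondents + rep_respondents)

# ~0.556: a 2-point gap in response rates overstates the Democratic share
# of the sample by about 5.6 points before any weighting is applied.
print(round(sample_dem_share, 3))
```

Weighting on party could repair this in principle, but only if partisanship is measured well and the true targets are known, which is exactly the problem with latent political orientations.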

Graphing these social trust rates broken up by three self-reported ideological groups reveals similar patterns (in terms of their implications along the left-right spectrum), as shown below. While the percentage of liberals saying most people cannot be trusted grew by 3.8 points from 1975 to 2014, the same rate of distrust among conservatives increased by 10.8 points. This is hardly unexpected given how correlated partisanship and ideology have become, and given what the previous plot displayed. Still, it shows the potential for diverging trends in social trust to produce nonresponse biases that involve latent political orientations, and thus biases that are not easily correctable.

[Figure: Social trust and distrust by self-reported ideology, 1975–2014]

The Case of Survey-Taking Among Republicans and Conservatives

The above two graphs quantify the greater increase in distrust among Republicans and conservatives. Taken together with the research relating social capital to survey response rates, it's very reasonable to consider distrust a mechanism for declining response rates generally, and for growing differential (partisan) nonresponse in particular. Rising distrust (of people generally, but also of social institutions) among those on the right wing is not unprecedented. Concerns over the ills of "big government," intrusion into daily life, and violations of individual privacy are more common among conservatives. In this sense, though the connection is a bit speculative, requests to take surveys (which of course require revealing personal information) could be received more poorly by conservatives and treated as acts of intrusion. Perhaps distrust of this kind has made Americans on the right less likely to respond to pollsters soliciting survey responses.

Along similar lines, when I first started to consider nonresponse bias as a source of 2016 polling error back in November, I recalled the efforts by Republican lawmakers to cut administration of the American Community Survey (ACS). In 2012, the Republican-led House voted to eliminate the ACS. Here are some descriptions of the motivations behind these efforts (emphasis mine):

  • “This is a program that intrudes on people’s lives, just like the Environmental Protection Agency or the bank regulators,” said Daniel Webster, a first-term Republican congressman from Florida who sponsored the relevant legislation.
  • Mr. Webster says that businesses should instead be thanking House Republicans for reducing the government’s reach. “What really promotes business in this country is liberty,” he said, “not demand for information.”

Even within the last few years, Republican lawmakers have continued to push to curb survey research, specifically criticizing the ACS as an invasion of privacy. A 2015 FiveThirtyEight piece by Ben Casselman on similar issues in Canada touched on the efforts of Texas Congressman Ted Poe in the U.S. Poe has repeatedly introduced a bill in Congress to make the mandatory ACS a voluntary survey, a change that could seriously damage the quality of data that many rely on for important decision-making, as Canada learned with one of its major household surveys. The language Poe uses in support of the bill reinforces Webster's comments above, speaking to the role of right-of-center ideology in survey refusal. Again, Poe's points of emphasis are the "government-mandated" nature of the ACS, the warning that "the government will come after you" if the ACS is not completed, and the claim that the ACS constitutes "another example of unnecessary and completely unwarranted government intrusion."

Especially in recent years, the ACS has become increasingly tied to the conservative ideological framework's negative perception of big government, with its personal questions seen as intrusions into daily life and as governmental overreach. The direct evidence involves only the ACS, which makes sense given that it is conducted by a government agency. However, I would consider it very likely that the same linkage extends to political surveys more broadly, such as pre-election polls; in other words, all types of surveys come to represent intrusive violations of privacy at odds with key conservative ideological tenets. Notably, the remarks from conservatives that I've cited here come from elites (members of Congress) and not from the masses in the form of opinion polling. But given the well-established tendency of the public to follow cues from elites of the same partisan stripe, it seems likely that the Republican/conservative rank-and-file on the receiving end of these cues is also coming to view the ACS, and surveys generally, in a similar light. In this (speculative) sense, right-wingers could be growing less willing to take surveys as debates over matters like the ACS become more common. That, in turn, would produce nonresponse biases involving political orientation. While a bit removed, the notions of social trust and social capital could still be at play in this expansion of where conservative ideology is applied.

Trust Among Non-College Whites and Polling Error

Finally, I wanted to examine the same social trust data along non-political lines, namely the combination of two variables that defined one of the most crucial demographic groups of this past election: non-college-educated whites. This segment of the population swung strongly toward the Republican column with Trump as the party's presidential candidate. Here's how the group's level of social trust has changed over the last few decades:

[Figure: Social trust and distrust among non-college-educated whites, 1972–2014]

The pattern of sizable declines in social trust and increases in social distrust mirrors the one seen in the first graph above for all GSS respondents. However, the decline in trust among this subgroup occurs at a higher rate. While the percentage of all respondents saying they cannot trust most people increases by 14.7 points over this period, non-college whites grow 22.3 percentage points more distrustful during the same time. Again, given that social trust likely has some (though hard-to-quantify) impact on survey response rates, this faster-growing distrust among non-college whites could imply nonresponse rates for this group that are growing faster than for the population as a whole. Nonresponse bias (and changes in this bias over time) could thus play a particularly important role among non-college whites, who also happened to assume even greater importance in deciding the 2016 election. Correcting for demographic biases is within pollsters' reach. Though I'm not entirely certain of this, while pollsters can weight on race (white percentage) and education (non-college percentage) individually, they likely often do not weight on the interaction of the two (white non-college percentage). Perhaps this explains why a demographic nonresponse bias such as this one can persist even after statistical adjustments.
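
To make that distinction concrete, here is a minimal sketch with invented numbers: raking (iterative proportional fitting) a sample to separate race and education targets matches both margins exactly, yet leaves the joint white non-college cell wherever the sample's internal structure puts it.

```python
import numpy as np

# Made-up sample counts: rows = race (white, nonwhite),
# cols = education (non-college, college). Assume the sample
# underrepresents white non-college respondents.
sample = np.array([[200.0, 400.0],
                   [150.0, 250.0]])

# Assumed population margins: 65% white, 60% non-college (illustrative).
race_targets = np.array([0.65, 0.35])
educ_targets = np.array([0.60, 0.40])

w = sample / sample.sum()  # start from sample proportions
for _ in range(100):       # iterative proportional fitting (raking)
    w *= (race_targets / w.sum(axis=1))[:, None]  # match the race margin
    w *= (educ_targets / w.sum(axis=0))[None, :]  # match the education margin

print(w.sum(axis=1), w.sum(axis=0))  # both margins now hit their targets...
print(w[0, 0])  # ...but the white non-college cell is pinned down by the
                # sample's interaction structure, not by a population target
```

Weighting directly on the white non-college cell would fix this, but only if a pollster chooses to target that interaction rather than the two margins alone.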

Considering all of this in tandem could begin to shed light on where and why polling error occurred in 2016. Take the following graph, which plots the non-college share of each state's white population against the absolute polling error in the Clinton margin, as one indicator:

[Figure: Non-college share of each state's white population vs. absolute polling error in Clinton margin]

The relationship between non-college whites and polling error in a state is fairly strong at the bivariate level; the adjusted R-squared here is 0.37, a good amount for one variable explaining another. This assessment is very ecological in nature, and thus should be viewed with caution. However, the relationship does speak to a mechanism that could link a particular demographic to not taking surveys, and thus to inaccuracy in polling. With its social trust declining faster than the overall population's, non-college-educated white Americans might also be growing increasingly less likely to take surveys. Correcting for this group's absence from surveys is not always straightforward, introducing the potential for nonresponse bias in survey results. When this group significantly changes its vote preference, to the point of "vot[ing] like a minority group," nonresponse bias could lead to the type of polling error that both public and private surveys suffered from during the 2016 election.
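
For reference, this kind of bivariate check is a one-line OLS fit. The sketch below uses invented state-level values (not the data behind the graph), with the adjusted R-squared as the quantity of interest.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical state-level data (illustrative values only): the non-college
# share of each state's white population, and the absolute error in the
# polled Clinton margin, in percentage points.
noncollege_white_share = np.array([0.48, 0.51, 0.55, 0.58, 0.62, 0.66, 0.70, 0.74])
abs_poll_error = np.array([1.4, 2.5, 2.1, 3.3, 2.9, 4.8, 4.2, 6.1])

X = sm.add_constant(noncollege_white_share)  # intercept + single predictor
fit = sm.OLS(abs_poll_error, X).fit()

print(fit.rsquared_adj)  # the post reports ~0.37 on the real state data
print(fit.params)        # intercept and slope of the bivariate fit
```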


6/22/17 edit: For the graphs of the GSS social trust question, I used the response options as laid out in the GSS data explorer, not the exact ones asked in the survey itself (see page 387 here). This does not change the meaning of the graphs much, as the "Can trust" vs. "Cannot trust" options capture the differences in responses well and more clearly.
